Project Notes

1 CS790: Project Proposals

1.1 Distributing Mumps Locks

Review the methods by which the GT.M Mumps (NoSQL) database synchronizes shards and locks across the Production data-set. Identify possible improvements.

In Mumps, the lock operation (L+) is server-local, which encourages centralizing processing on a single production server within a data center. This leads to significant capital investment costs to purchase servers large enough to handle the massive processing requirements of both entering and analyzing medical data, on demand and proactively. By storing data in a non-shared data structure, it may be possible to distribute the end-user (non-proactive) data processing load across servers. This may reduce infrastructure costs by allowing multiple inexpensive servers to perform the work of one costly server.

The easiest way to build a non-shared data structure is to use a lock-free design instead of a lock-based one. We could do this by treating patient data as a DAG (with edit times and a revision history) instead of a flat data structure. We should be able to handle several different types of data, each with its own workflow for different appointment types:

Office Visits:

  1. Vitals
  2. Medication Prescriptions
  3. Duplicate Medication Orders
  4. Lab Orders
  5. Redundant Lab Orders
  6. Notes
  7. Patient Instructions

Telephone calls:

  1. Medication Prescriptions
  2. Duplicate Prescriptions
  3. Lab Orders
  4. Redundant Lab Orders
  5. Notes

Storing patient data edits in a DAG might allow interesting cluster structures that could ultimately provide higher throughput because they would allow batching data operations into transactions. For example, a hospital could have its own server that could check out the patient record when the patient has an appointment the next day. Then, each department in the hospital could have its own server that checks out the parts of the patient's record that they would need for the visit (ophthalmology departments would not have complete overlap with oncology departments, for example). When changes were committed locally, they could be pushed back up the server tree to the root node, where any other server could check them out and be notified that the data changed.

The server subscription model could be built similarly to Chubby, allowing us to specify a maximum throughput for the cluster and maximum lease time for any data elements. Incorporating the BigTable concept of timestamped data into the data model would make reconciling data less dangerous, as both the stale and current state would be available for reference. However, making all this data available for reference is a difficult and dangerous proposition because each data-set then has at least three different views that may need to be presented to the user in different situations:

  1. Entry view: the normal view as users enter data into the system on a specific appointment and compare against previously recorded data on that appointment (and other relevant fields, like "previous liver enzymes" when ordering amoxicillin).
  2. Review view: the normal patient-timeline view as users examine historical data in the system on a per-appointment basis, allowing for both migration between appointments and between versions of an appointment. Allowing users to edit entries in this view might allow this view to also work as the entry view. However, users editing historical data could easily result in a 3-way merge situation, which would be difficult to explain in a usable way in the UI.
  3. Revision history view: Users examine data in the order it was entered into the chart. This may get confusing and difficult to display to end-users, as single entries might affect multiple appointments, and later entries may update earlier appointments (e.g., indicating that the patient was very late to an appointment instead of skipping it entirely, as originally entered). The best way to display an omniscient perspective on the patient timeline is still an open question.

1.1.1 Success

In this model, success looks like being able to do the following:

  1. Start with a flat database model and then apply a BigTable-style revision history or Git-like DAG to it.
  2. Have a sensible UI for all the editing cases.
  3. Sync changes back to other servers and address conflicts without locks.

1.1.2 Steps

  1. Build a simple patient data model encapsulating the 5 data types.
  2. Build a simple data review interface.
  3. Add a branch-like or time-based transaction model to the data structures so that multiple users can edit them in parallel and only need to resolve conflicts when concurrent writes occur.
  4. Add the revision-history view of the data.

Notably, the actual server-to-server networked communication of the data transactions is not included because it would probably make this project too big, and it follows directly from the transaction model that allows multiple users to edit the patient record at once on a single server.

1.2 Strong Cold Boot Randomness on Headless Servers

Identify methods to ensure randomness on headless servers from the first cold boot (making every VM a one-shot VM since it stores a private key, or requiring a connection to a remote private key generation server).

There are two primary approaches to strong cold boot randomness:

  1. Build the key into the VM image itself so that it's available on first boot, along with a large random file. On boot, read in the large random file as a source of entropy for the first boot.
  2. Build an external key generation server and service that establishes a private connection to the VM to deliver the key and some random data as an entropy source.

1.2.1 Assumptions

  • The disk image of the VM itself is never accessible to attackers.
  • VMs are initialized by a remote service (Jenkins) and not directly controlled by the user.

1.2.2 Success

In this model, success looks like:

  1. I can spin up new VMs in under 5 minutes.
  2. Those VMs will have unique keys.
  3. Those VMs will have enough entropy available to provide a secure random seed to eliminate potential SSL replay attacks while serving requests for a simple website.

1.2.3 Steps

  1. Build a normal Propellor disk image.
  2. Add to VM build process: Add new X509 key to VM's disk image.
  3. Add to VM build process: write 10MB of new random data to VM's disk image.
  4. Add to VM boot process: read in the random data as part of boot.
  5. Port /dev/arandom from *BSD to Linux to make sure those 10MB of random data aren't written until they're cryptographically meaningful.

2 Project 1: Distributing Mumps Locks

I can do this with a few simple data elements, rather than something more complex. I can start with a single-response data model, like patient name, age, sex, and vitals.

2.1 Environment Setup

The first step, of course, is cleaning and configuring a new environment.

<new-env> =

#
# Wipe clean and configure the environment.
#
sudo killall mumps
sudo /usr/share/zookeeper/bin/zkServer.sh stop
sleep 5
sudo killall java
sleep 5
sudo rm -r /var/lib/zookeeper/*
rm -r gtm

# start again
mkdir -p gtm/r gtm/o gtm/g
emacs notes.org --batch -f org-babel-tangle

<<env-vars>>
<<allow-mumps-compile>>
<<allow-zookeeper-log>>

<<configure-zk-cluster>>
<<create-database>>
<<clean-database>>
<<setup-functions>>
#
# Verify the database stores data.
#
echo "s ^XTEST=0" | gtm
./gtm/gtm-env -r "^test"
./gtm/gtm-env -r "^test"
echo "If XTEST = 2, database storage is persistent."

Required environment variables for running GTM in this local environment.

<env-vars> =

export gtm_dist=/usr/lib/x86_64-linux-gnu/fis-gtm/V6.3-007_x86_64/
export PATH=$gtm_dist:$PATH
export gtmgbldir="./gtm/g/globals.gld"
export gtmroutines="./gtm/r $gtm_dist"

Those variables are used in the GTM environment script.

<setup-gtm> =

<<env-vars>>
exec gtm "$@"

2.1.1 Database Setup Functions

These functions need to be executed to initialize the database for use.

<setup-functions> =

#
# initialize database.
./gtm/gtm-env -r "setup^Cluster"
./gtm/gtm-env -r "setup^SetData"
./gtm/gtm-env -r "setup^TestData"
#
# regularly query cluster for changes.
./gtm/gtm-env -r "start^Cluster"

The routines directory needs to be writable to compile and execute expected built-in routines.

<allow-mumps-compile> =

#
# make routines directory writable.
echo -n "Making Mumps directory writable..."
sudo groupadd mumps
sudo usermod -a -G mumps `whoami`
sudo chgrp -R mumps $gtm_dist
sudo chmod g+w $gtm_dist
echo  " Done!"

Allow the user to write to the zookeeper log, as well.

<allow-zookeeper-log> =

#
# make zk logfile writable.
echo -n "Making Zookeeper logfile writable..."
sudo groupadd zookeeper
sudo usermod -a -G zookeeper `whoami`
sudo chgrp -R zookeeper /var/log/zookeeper/
sudo chmod -R g+w /var/log/zookeeper/
echo  " Done!"

echo "Log out and back in to update group memberships."

2.1.2 Configure Database

Create the database by running ^GDE, which can't be configured directly from the command-line.

<create-database> =

#
# erase gtmroutines so we can find gde, to create globals.dat
gtmroutines=""; export gtmroutines
cat gtm/gtm-setup.mscript | gtm
mupip create

Run ^GDE to create globals.dat to store permanent data.

<gtm-env-setup> =

d ^GDE
change -segment default -allocation=1024 -file=./gtm/g/globals.dat
exit
h

2.1.3 Executables

Verify the database works.

<test-file> =

test
  w !,"XTEST before increment: "_^XTEST
  s %=$I(^XTEST) ; increment the XTEST database record.
  w !,"XTEST after increment:  "_^XTEST
  w !
  q

Execute the code in an environment that persists between runs.

<run-test> =

d ^test

GTM>

XTEST before increment: 0
XTEST after increment:  1
GTM>

2.1.4 Cluster Configuration

The Synchronize Test Data section covers configuring the cluster within the Mumps database context. However, the Zookeeper cluster still needs to be configured on its own. It needs to be started and the nodes we're going to use need to be created.

<configure-zk-cluster> =

export ZKPATH="/usr/share/zookeeper/bin"
#
# (clean-zookeeper)
export maxAudit=`/usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 get /audit 2>/dev/null | egrep "^[0-9]+"`
for x in `seq 0 $maxAudit`
do
    /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 delete /audit/$x
done
$ZKPATH/zkCli.sh -server localhost:2181 delete /audit
#
# start zookeeper
echo "Starting Zookeeper server: zkServer.sh start"
sudo $ZKPATH/zkServer.sh start
$ZKPATH/zkCli.sh -server localhost:2181 create /records 0
$ZKPATH/zkCli.sh -server localhost:2181 create /edits 0
$ZKPATH/zkCli.sh -server localhost:2181 create /audit 0
$ZKPATH/zkCli.sh -server localhost:2181 create /scalar 10000

When rebuilding the database, the only node that interferes with new instances is the "/audit" node. That should be cleaned up between runs (clean-zookeeper).

2.2 DONE Build HTML on Push

Fortunately, I was able to reuse my build script from CS739-P2. This publishes to https://nickdaly.gitlab.io/cs790-p1.

2.3 DONE Build Task Calendar

2.4 DONE Build Data Models

Record Types:

  1. Patient
  2. Medication
  3. Prescription
  4. Notes
  5. Labs
  6. Lab orders

We also have linkage records that track how records relate to each other:

  1. Patient Linkage

We also timestamp each time we change a value, allowing us to locate the most recent data elements fairly quickly. Note that all of these data elements are timestamped because we assume data may always be wrong and need to be corrected later.

2.4.1 Patient Data Elements

Stored in ^PATIENT.

  1. Patient Linkage Record: ID
  2. Name: string
  3. Birthdate: date
  4. Sex: string
  5. Appointment Date: date

Ideally, appointment date would have a chronological edit ID index so we could quickly source all the information about a specific appointment when displaying appointment-oriented reports. Next, appointment-specific vitals!

  6. Systolic BP: positive number
  7. Diastolic BP: positive number
  8. Respiration: positive number
  9. Temperature: positive number

2.4.2 Patient Linkage Data Elements

Data linked to the patient. Stored in ^PATIENTLINK.

  1. Notes: ID
  2. Prescriptions: ID
  3. Lab Orders: ID

Other record types don't have linkage records because they are all 1:1 mappings.

2.4.3 Notes

Progress notes about the patient or instructions for the patient. Stored in ^NOTES.

  1. Note author's name: string
  2. Display to patient?: boolean
  3. Note text: string

2.4.4 Medication Prescriptions

Medications like Tylenol, albuterol, and Lipitor. Stored in ^MEDRX.

  1. Patient: ID
  2. Medication: ID
  3. Start Instant: instant
  4. End Instant: instant
  5. Dosing equation, counting from start instant: string
  6. Dispense Quantity: positive number
  7. Refill Frequency: timespan
  8. Refills Remaining: positive number

2.4.5 Medical Procedures

Medical procedures include things like chest X-rays, spinal taps, and surgeries for installing plates in broken bones. Stored in ^PROCRX.

  1. Patient: ID
  2. Procedure: ID
  3. Scheduled Date: date
  4. Comments: string

2.4.6 Medications

The concept of a medication. Stored in ^MED.

  1. Medication Name: string

2.4.7 Procedures

The concept of a procedure. Stored in ^PROC.

  1. Procedure Name: string

2.4.8 DONE Summary

The database is laid out as follows.

<database-layout-summary> =

@startuml
Patient --> PatientLink
class Patient {
        1. PatientLink: ID
        2. Name: string
        3. Birthdate: date
        4. Sex: string
        5. Appointment date: date
        6. Systolic BP: positive number
        7. Diastolic BP: positive number
        8. Respiration: positive number
        9. Temperature: positive number
}

PatientLink --> Note
PatientLink --> MedRx
PatientLink --> ProcRx

class PatientLink {
        1. [Note, ...]: IDs
        2. [MedRx, ...]: IDs
        3. [ProcRx, ...]: IDs
}

class Note {
        1. Text: string
        2. Author name: string
        3. Display to patient?: boolean
}

MedRx --> Patient
MedRx --> Medication

class MedRx {
        1. Patient: ID
        2. Medication: ID
        3. Start Instant: instant
        4. End Instant: instant
        5. Dosing equation, counting from start instant: string
        6. Dispense Quantity: positive number
        7. Refill Frequency: timespan
        8. Refills Remaining: positive number
}

ProcRx --> Patient
ProcRx --> Procedure

class ProcRx {
        1. Patient: ID
        2. Procedure: ID
        3. Scheduled Date: date
        4. Comments: string
}

class Medication {
        1. Name: string
}

class Procedure {
        1. Name: string
}
@enduml

Each database write is also mirrored in the audit global. This is used for tracking purposes as well as for displaying changes since the last sync to the user.

When records are created or edit IDs are reserved, the node above that entry stores the entry's ID. This allows us to differentiate between creating the ID and storing data within the ID itself. Deletions are logged by prefixing the entry with "K" for the Mumps "kill" command. The optional line number field is only used in ^PATIENTLINK records where, e.g., a user might order multiple prescriptions for the patient at once.

<audit-global-layout> =

digraph g {
    "^AUDIT" -> {"localAuditInstant-1" localAuditInstant "localAuditInstant+1"}
    localAuditInstant -> {"editInstant-1" editInstant "editInstant+1"}
    editInstant -> {"recordType-1" recordType[label="recordType: record"] "recordType+1"}
    recordType -> {"record-1" record[label="record: edit"] "record+1"}
    record -> {"edit-1" edit "edit+1"}
    edit -> {"field-1" field "field+1"}
    field -> "[optional line number]" -> value
}


All of the data globals have a similar layout, but also include an edit (appointment) ID, allowing data elements to be associated with specific appointments on particular days.

<data-global-layout> =

digraph g {
  "^PATIENT" -> {"patient-1" patient "patient+1"}
  patient -> { "edit-1" edit "edit+1" }
  edit -> {"field-1" field "field+1"}
  field -> {"editInstant-1" editInstant "editInstant+1"}
  editInstant -> value
}
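
For example (borrowing the record, edit, and timestamp values that appear in the ^AUDIT samples later in these notes; the numbers themselves are arbitrary), a single systolic blood pressure entry would land in the data global roughly as:

^PATIENT(3020009,400009,6,1587440374000217)=120

Here 3020009 is the patient record ID, 400009 the edit (appointment) ID, 6 the systolic BP field, and 1587440374000217 the edit instant in microseconds.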

2.5 DONE Save Data Functions

The save data functions allow users to save data, to create records and edits, and to kill records.

<save-data-header> =

1: SetData
2:   ; just utility functions.
3:   q
4: <<save-data-functions>>

2.5.1 Utility Functions

By default, now returns the current Unix instant (microseconds since UTC midnight on 1/1/1970), with system-dependent accuracy. However, if a parameter is passed in, it will return that parameter, allowing for relatively transparent unit testing.

<now> =

1: now(time)
2:   ; Returns the current Unix epoch time or the passed in time.
3:   ; ==================================================================
4:   q:time time
5:   q $ZUT
6:   ;
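
For example, $$now^SetData("") returns the current $ZUT value, while $$now^SetData(1587440374000217) simply returns 1587440374000217, which is what makes deterministic test runs possible.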

Strip removes whitespace from the beginning and end of strings by extracting the characters (extract) between the whitespace characters (whitespace), stopping at the first non-whitespace character.

<strip> =

 1: strip(string)
 2:   ; Remove whitespace from ends of string.
 3:   ; ==================================================================
 4:   n whitespace,start,end
 5:   ;
 6:   s whitespace=" "_$C(9,10,13)
 7:   ;
 8:   f start=1:1:$L(string) q:'(whitespace[$E(string,start))  ; (whitespace)
 9:   f end=$L(string):-1:1 q:'(whitespace[$E(string,end))
10:   ;
11:   q $E(string,start,end)  ; (extract)
12:   ;
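
For example, $$strip^SetData("  zzztest ") returns "zzztest".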

There are also convenience functions to refer to the six different types of record globals.

<setdata-convenience> =

 1:   ; ==================================================================
 2:   ; Convenience functions to refer to owning globals.
 3: Records()
 4:   q $name(^RECORDS)
 5: Edits()
 6:   q $name(^EDITS)
 7: Med()
 8:   q $name(^MEDICATION)
 9: MedRx()
10:   q $name(^MEDRX)
11: Pat()
12:   q $name(^PATIENT)
13: PatLink()
14:   q $name(^PATIENTLINK)
15: Proc()
16:   q $name(^PROCEDURE)
17: ProcRx()
18:   q $name(^PROCRX)
19:   ;

2.5.2 Error Checking

Error checking functions follow.

  1. Bad Global

    Users saving data to globals:

    1. Must specify a global (bg-null, bg-global), and
    2. Must specify an extant global by name (bg-exist), and
    3. Must not save data to the ^AUDIT global, or any of its subnodes (bg-audit).

    <bad-global> =

    1: isGoodGlobal(global)
    2:   ; Is the global a well-formed data global?
    3:   ; ==================================================================
    4:   q:global="" 0          ; (bg-null)
    5:   q:$E(global,1)'="^" 0  ; (bg-global)
    6:   q:'$D(global) 0        ; (bg-exist)
    7:   q:(global["^AUDIT") 0  ; (bg-audit)
    8:   q 1
    9:   ;
    

2.5.3 Create Record

Reserve a new record ID of any type (NewRecord-create) and log it to the ^AUDIT global (NewRecord-audit), at the global level, before returning it (NewRecord-return).

<new-record> =

 1: NewRecord(global)
 2:   ; Reserve a new record ID.
 3:   ; ==================================================================
 4:   n editInstant,recordId
 5:   ;
 6:   q:'$$isGoodGlobal(global) 0
 7:   ;
 8:   s editInstant=$$now("")
 9:   s recordId=$$nextId^Cluster($$Records)  ; (NewRecord-create)
10:   ; (NewRecord-audit)
11:   s ^AUDIT(editInstant,editInstant,$TR(global,"^",""))=recordId
12:   ;
13:   q recordId                              ; (NewRecord-return)
14:   ;

This should be used like s patId=$$NewRecord^SetData($name(^PATIENT)).

2.5.4 New Edit

Create a new edit on a record. This could be an appointment for a patient, an edit to a prescription, or an administrative edit to the behind-the-scenes details of a medication.

The edit's instant and ID are stored in the ^AUDIT global, at the record level (audit-record), before the ID is returned. If the instant is not specified as a parameter, the current time is used (null-instant).

<new-edit> =

 1: NewEdit(global,recordId,editInstant)
 2:   ; Create a new edit in a record.
 3:   ; Optional: editInstant
 4:   ; ==================================================================
 5:   q:'$$isGoodGlobal(global) 0
 6:   ;
 7:   s:editInstant="" editInstant=$$now("")  ; (null-instant)
 8:   s editId=$$nextId^Cluster($$Edits)
 9:   ; (audit-record)
10:   s ^AUDIT(editInstant,editInstant,$TR(global,"^",""),recordId)=editId
11:   ;
12:   q editId
13:   ;
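
Following the NewRecord example above, a typical call is s patEdit=$$NewEdit^SetData($name(^PATIENT),patId,""), which opens a new edit (appointment) on the patient record.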

2.5.5 Save Record Field

Set and audit data in a record's field. This requires all of the subnodes, except the value and edit instant, to be specified. After the data are set locally, a background job is started to push the data into the cluster (cluster-submit).

To avoid data loss, we take care to avoid writing data into already existing nodes. If no data currently exists at the edit instant, that edit instant is used. Otherwise, the edit instant for that node is incremented until an unused instant is found (find-free-instant). Since edit instants have a microsecond resolution, are server-local, and are tied to a specific field on a record, conflicts are extremely rare.

This does not create issues when advancing the cluster's edit log and loading our own data back from the cluster. The cluster log loading process goes through a different code path that does not modify the edit instant. Therefore, any entries that we push to the cluster will be mirrored back down to the server precisely as they were sent, avoiding a potential infinite loop where cluster-loaded data gets shifted by a single microsecond before being mirrored back to the cluster.

<set-record> =

 1: SetRecord(global,recordId,editId,editInstant,field,value)
 2:   ; Set data in a record.
 3:   ; Optional: editInstant, value
 4:   ; ==================================================================
 5:   n newNode s newNode=0
 6:   ;
 7:   q:'$$isGoodGlobal(global) 0
 8:   q:(+recordId=0)!(+editId=0)!(field=0) 0
 9:   ;
10:   s:'editInstant editInstant=$$now("")
11:   ;
12:   ; Avoid overlapping data entries (find-free-instant)
13:   f editInstant=editInstant:1 d  q:newNode
14:   . ; lock the potential edit
15:   . L +@global@(recordId,editId,field,editInstant):0
16:   . ; if the edit could be locked, check to see if it exists.
17:   . ; if it couldn't be locked, somebody else is already using it
18:   . s:$T newNode='$D(@global@(recordId,editId,field,editInstant))
19:   . ; if the node doesn't yet exist, use it.
20:   . q:newNode
21:   . ; if the node already exists, unlock it and try the next.
22:   . L -@global@(recordId,editId,field,editInstant)
23:   ;
24:   s @global@(recordId,editId,field,editInstant)=value
25:   L -@global@(recordId,editId,field,editInstant)
26:   s ^AUDIT(editInstant,editInstant,$TR(global,"^",""),recordId,editId,field)=value
27:   ;
28:   j writeChange^Cluster($R)  ; (cluster-submit)
29:   ;
30:   q value
31:   ;
  1. TODO Mirror this Locking in AppendRecord, GetRecordList, SaveRecordList

2.5.6 Append to Record List

Patient Link records don't contain simple fields; instead they contain lists of records associated with a particular edit. This method pushes data onto the end of a record's list. Each list node stores the current number of entries as its value, which is atomically incremented ($I) when a new entry is added.

<append-record-list> =

 1: AppendRecord(global,recordId,editId,editInstant,field,value)
 2:   ; Add data to the end of a list in a record.
 3:   ; Optional: editInstant, value
 4:   ; ==================================================================
 5:   n entry
 6:   q:'$$isGoodGlobal(global) 0
 7:   q:(+recordId=0)!(+editId=0)!(field=0) 0
 8:   ;
 9:   s:'editInstant editInstant=$$now("")
10:   s entry=$I(@global@(recordId,editId,field))
11:   ;
12:   s @global@(recordId,editId,field,editInstant,entry)=value
13:   s ^AUDIT(editInstant,editInstant,$TR(global,"^",""),recordId,editId,field,entry)=value
14:   ;
15:   j writeChange^Cluster($R)
16:   ;
17:   q value
18:   ;

In a more complete implementation, pop, insert-at-index, and remove-at-index would also be present.
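
As a rough illustration (not part of the tangled code), a pop could be built from the same pieces; this sketch assumes it lives alongside the other SetData helpers, that the caller passes the same editInstant the entries were stored under, and it omits the ^AUDIT logging and cluster push that AppendRecord performs:

pop(global,recordId,editId,editInstant,field)
  ; Remove and return the last entry in a record list (illustrative sketch only).
  n entry,value
  q:'$$isGoodGlobal(global) ""
  s entry=$G(@global@(recordId,editId,field))      ; current entry count
  q:'entry ""                                      ; empty list, nothing to pop
  s value=$G(@global@(recordId,editId,field,editInstant,entry))
  k @global@(recordId,editId,field,editInstant,entry)
  s @global@(recordId,editId,field)=entry-1        ; shrink the stored count
  q value
  ;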

2.5.7 DONE Get Record List

It's sometimes helpful to be able to return the entire list at once instead of operating element by element. This returns the edit instant node and the child nodes, and may be used to update the current values in the list before saving them again. To function as expected, the result parameter must be passed in by reference.

<get-record-list> =

 1: GetRecordList(global,recordId,editId,editInstant,field,result)
 2:   ; Get data from a record list, returned in the result array.
 3:   ; Optional: editInstant
 4:   ; Reference: result
 5:   ; ==================================================================
 6:   q:'$$isGoodGlobal(global) 0
 7:   q:(+recordId=0)!(+editId=0)!(field=0) 0
 8:   ;
 9:   s:'editInstant editInstant=$$now("")
10:   m result=@global@(recordId,editId,field,editInstant)  ; (get-merge-array)
11:   ;
12:   q result
13:   ;

2.5.8 DONE Save Record List

This function saves data from an array into the specified global. Note the similarity between this function and GetRecordList, which differs only in the direction of the merge: here, subnodes are merged into the global (set-merge-array) from an input array. To function as expected, that array must be passed in by reference.

<set-record-list> =

 1: SetRecordList(global,recordId,editId,editInstant,field,newData)
 2:   ; Set data in a record list.
 3:   ; Optional: editInstant
 4:   ; Reference: newData
 5:   ; ==================================================================
 6:   q:'$$isGoodGlobal(global) 0
 7:   q:(+recordId=0)!(+editId=0)!(field=0) 0
 8:   ;
 9:   s:'editInstant editInstant=$$now("")
10:   m @global@(recordId,editId,field,editInstant)=newData  ; (set-merge-array)
11:   ;
12:   q newData
13:   ;

2.5.9 Kill Record

Remove a record from the database. Log the removal in the audit trail at the global level. These kills are not yet synchronized to the cluster.

<kill-record> =

 1: killRecord(global,recordId)
 2:   ; Erase a record from the database.
 3:   ; The ID will not be reused later.
 4:   ; ==================================================================
 5:   q:'$$isGoodGlobal(global) 0
 6:   q:(+recordId=0) 0
 7:   ;
 8:   s editInstant=$$now("")
 9:   s ^AUDIT(editInstant,editInstant,$TR(global,"^",""))="K "_recordId
10:   ;
11:   k @global@(recordId)
12:   ;
13:   q editInstant
14:   ;
  1. TODO Synchronize the kill to the cluster.

2.5.10 One-Time Data Model Setup

This prepares the database for use.

<data-model-setup> =

 1: setup()
 2:   ; Perform one-time database setup.
 3:   ; Create all the nodes needed for cluster synchronization.
 4:   ; ==================================================================
 5:   s ^RECORDS=0
 6:   s ^RECORDS("max")=0
 7:   s ^RECORDS("query-time")=0
 8:   s ^RECORDS("version")=0
 9:   ;
10:   s ^EDITS=0
11:   s ^EDITS("max")=0
12:   s ^EDITS("query-time")=0
13:   s ^EDITS("version")=0
14:   ;
15:   q
16:   ;

2.6 DONE Save Test Data

This can be scripted so it creates a lot of data, but it must be deterministic, so I'll need to pick a particular test time and date to work with. Additionally, I can assume that the globals don't exist before running this, so I know the record IDs without tracking them strictly.

<TestData> =

 1: TestData
 2:   ; utility functions
 3:   q
 4: setup()
 5:   n medRecs,procRecs
 6:   s $ETRAP="B"
 7:   d MakeData(.medRecs,.procRecs)
 8:   d Patient1(.medRecs,.procRecs)
 9:   q
10: <<create-test-data>>

2.6.1 Basic Records

First, we need some type records for medications and procedures that can be ordered for the patient. Each new record has one edit created, so we know the lengths of the ID and edit array pairs are the same.

<create-test-data> =

1: MakeData(medRecs,procRecs)
2:   ; Create new medication and procedure records.
3:   ; Reference: medRecs, procRecs
4:   ; ==================================================================
5:   s %=$$MakeRecNames($$Med^SetData,"acetaminophen,albuterol,ibuprofen,loratidine",.medRecs)
6:   s %=$$MakeRecNames($$Proc^SetData,"cbc,chest x-ray,lipid panel,rapid strep test",.procRecs)
7:   q
8:   ;

MakeRecNames creates new medication and procedure records (rec-new-ids), with names stored in field 1 (rec-save-name). It returns the created record and edit IDs in the recIds parameter (rec-store-ids), which must be passed by reference (using "." rather than the "&" found in some other languages).

<create-test-data> =

 1: MakeRecNames(global,recNames,recIds)
 2:   ; Record creation helper function.
 3:   ; Reference: recIds
 4:   ; ==================================================================
 5:   n recId,recEdit,i
 6:   ;
 7:   ; iterate over the name list
 8:   f i=1:1:$L(recNames,",") d
 9:   . ;
10:   . ; reserve new record and edit IDs (rec-new-ids)
11:   . s recId=$$NewRecord^SetData(global)
12:   . s recEdit=$$NewEdit^SetData(global,recId,"")
13:   . ;
14:   . ; save those IDs in the result arrays (rec-store-ids)
15:   . s recIds(recId)=recEdit
16:   . ;
17:   . ; actually save the record's name data. (rec-save-name)
18:   . s %=$$SetRecord^SetData(global,recId,recEdit,"",1,$$strip^SetData($P(recNames,",",i)))
19:   ;
20:   q 1
21:   ;

2.6.2 Patient 1

Create the first test patient by:

  1. Creating a patient and opening a new edit in their record (new-pat1).
  2. Creating a patient link record for that patient (new-link1).
  3. Linking the patient with their link record (make-link1).
  4. Storing sample demographics and vitals data (new-demographics1).
  5. Creating sample records (new-orders1). Note the use of empty (null) parameters to avoid passing in by-reference variables.

<patient1-test-data> =

 1: Patient1(medRecs,procRecs)
 2:   ; Create the first patient and their link record.
 3:   ; Reference: medRecs, procRecs
 4:   ; ==================================================================
 5:   n patId,patEdit
 6:   ;
 7:   ; create a patient (new-pat1)
 8:   s patId=$$NewRecord^SetData($$Pat^SetData)
 9:   s patEdit=$$NewEdit^SetData($$Pat^SetData,patId,"")
10:   ;
11:   ; create a patientLink record (new-link1)
12:   s patLinkId=$$NewRecord^SetData($$PatLink^SetData)
13:   s patLinkEdit=$$NewEdit^SetData($$PatLink^SetData,patLinkId,"")
14:   ;
15:   ; link patient to their link record (make-link1)
16:   s %=$$SetRecord^SetData($$Pat^SetData,patId,patEdit,"",1,patLinkId)
17:   ;
18:   ; store demographics (new-demographics1)
19:   d Patient1Roomed(patId,patEdit)
20:   ;
21:   ; create orders (new-orders1)
22:   d Patient1Orders(patId,patEdit,patLinkId,patLinkEdit,$O(medRecs("")),$O(procRecs("")))
23:   ;
24:   q
25:   ;
  1. Patient Rooming Information

    Store a patient's basic rooming information, the sort of data a nurse would validate and collect from a patient when initially bringing them to the exam room.

    1. Demographics (demogs), and
    2. Vitals, by adding each vital data element ID and its value to a list (vitals-list) and iterating through that list (vitals-iter: f i=start:increment:end), starting from the first element ID (5, visit date) and ending at the final element in the list (9, temperature).

    The first and last element IDs are selected with the order operator, $O (walters1997m, newman03:_mumps_docum), which returns the next element in the list, from the beginning forward ($O(vitals("")) = 5), or from the end, going backward ($O(vitals(""),-1) = 9).

    <patient-vitals> =

     1: Patient1Roomed(patId,patEdit)
     2:   ; Fill in patient 1 vitals
     3:   ; ==================================================================
     4:   n vitals
     5:   ;
     6:   ; basic demographics (demogs)
     7:   ; 2. Name
     8:   s %=$$SetRecord^SetData($$Pat^SetData,patId,patEdit,"",2,"zzztest zzztest")
     9:   ; 3. Birthdate: 35 years ago
    10:   s %=$$SetRecord^SetData($$Pat^SetData,patId,patEdit,"",3,$H-(365.25*35/1))
    11:   ; 4. Sex: Female
    12:   s %=$$SetRecord^SetData($$Pat^SetData,patId,patEdit,"",4,"female")
    13:   ;
    14:   ; Vitals (vitals-list)
    15:   s vitals(5)=+$H              ; today's appointment:
    16:   s vitals(6)=120,vitals(7)=80 ; bp: 120/80
    17:   s vitals(8)=20               ; respiration: 20
    18:   s vitals(9)=37               ; temperature: 37
    19:   ;
    20:   ; Iteratively store the vitals (vitals-iter)
    21:   f i=$O(vitals("")):1:$O(vitals(""),-1) d
    22:   . s %=$$SetRecord^SetData($$Pat^SetData,patId,patEdit,"",i,vitals(i))
    23:   ;
    24:   q
    25:   ;
    
  2. Ordered Medications and Procedures

    Create orders for the patient (new-rx), including a chest X-ray and a prescription for Tylenol (acetaminophen).

    <patient1-orders> =

     1: Patient1Orders(patId,patEdit,patLinkId,patLinkEdit,medId,procId)
     2:   ; Create patient 1 orders
     3:   ; Reference: medRxId, medRxEditId, procRxId, procRxEditId
     4:   ; ==================================================================
     5:   n med,proc,day,us
     6:   ;
     7:   s day=24*60*60,us=1000000 ; seconds per day, microseconds per second
     8:   ;
     9:   ; new prescriptions (new-rx)
    10:   s medRxId=$$NewRecord^SetData($$MedRx^SetData)
    11:   s medRxEditId=$$NewEdit^SetData($$MedRx^SetData,medRxId,"")
    12:   s procRxId=$$NewRecord^SetData($$ProcRx^SetData)
    13:   s procRxEditId=$$NewEdit^SetData($$ProcRx^SetData,procRxId,"")
    14:   ;
    15:   ; Medication
    16:   s med(1)=patId
    17:   s med(2)=medId
    18:   s med(3)=1577880000*us  ; start: noon on January 1st, 2020, GMT
    19:   s med(4)=1580299200*us  ; stop: noon on January 29th, 2020, GMT
    20:   s med(5)=day*us         ; take once daily
    21:   s med(6)=28             ; dispense 28 pills
    22:   s med(7)=(7*day*4)*us   ; refill every 4 weeks
    23:   s med(8)=0              ; no refills remaining
    24:   f i=1:1:8 s %=$$SetRecord^SetData($$MedRx^SetData,medRxId,medRxEditId,"",i,med(i))
    25:   ;
    26:   ; Procedure details
    27:   s proc(1)=patId
    28:   s proc(2)=procId
    29:   s proc(3)=1577887500*us  ; scheduled: 2:05 PM on January 1st, 2020, GMT
    30:   s proc(4)="ASAP"         ; comment
    31:   f i=1:1:4 s %=$$SetRecord^SetData($$ProcRx^SetData,procRxId,procRxEditId,"",i,proc(i))
    32:   ;
    33:   ; link to link-record
    34:   s %=$$AppendRecord^SetData($$PatLink^SetData,patLinkId,patLinkEdit,"",2,medRxId)
    35:   s %=$$AppendRecord^SetData($$PatLink^SetData,patLinkId,patLinkEdit,"",3,procRxId)
    36:   ;
    37:   q
    38:   ;
    

2.7 DONE Move bin/reset-env to src/new-environment

It's just a find-and-replace, but it's one I need to remember to do.

2.8 DONE Synchronize Test Data Between Multiple Users and Changers

Recalling CS-739 from last semester, I do need some externally synchronized server system to coordinate record and edit ID reservations. Further recalling CS-739, I don't have time to go through the two-month process of writing another one. Zookeeper seems like a well-reviewed and well-received system that shouldn't be too hard to add to my server.

On the Zookeeper side, I'll need to store two counters: one for record IDs, and one for edit IDs. Clients will request a list of available IDs for one or both types and Zookeeper will just send back a thousand of each (min new range, max new range).

On the Mumps side, I'll need to:

  1. Store the currently consumed ID (^RECORDS, ^EDITS).
  2. Store the maximum ID available (max).
  3. Store the last request time (query-time).
  4. Store the last response value (max).
  5. Store the last response version (version).
  6. If there are fewer than 10 records available and it's been more than 60 seconds since the previous request, call out to Zookeeper for each counter type and store the result back (min, max ranges).
  7. If zero records are available, hang until it's been more than 60 seconds and send out another request. Keep doing this until Zookeeper replies. In a production system, we'd probably use temporary IDs that would need to be manually reconciled later. But I never expect to see more than one Mumps server up at a time, so it's less of a concern for this test.
  8. Store the last data version received from the cluster.
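
As a worked example of that stored state (all numbers hypothetical): with a scalar of 10,000, if the cluster replies with 38 at version 12, the server's record counter would hold:

^RECORDS=380000                          ; counter; $I(^RECORDS) hands out 380001 next
^RECORDS("max")=389999                   ; last ID in the reserved range
^RECORDS("query-time")=1587440374000217  ; $ZUT when the cluster was last queried
^RECORDS("version")=12                   ; cluster-side version of the counter

The ^EDITS counter is handled identically.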

<cluster-names> =

 1: Cluster
 2:   ; just utility functions.
 3:   q
 4: <<cluster-convenience>>
 5: <<supported-cluster-types>>
 6: <<cluster-dispatch>>
 7: <<cluster-get>>
 8: <<cluster-set>>
 9: <<cluster-sync>>
10: <<cluster-setup>>
11: <<cluster-load-scalar>>
12: <<cluster-start>>

2.8.1 DONE ID Reservations

Each server reserves its own range of record and edit IDs and pulls from that list for any new record and edit. This greatly reduces how often we need to query the server cluster for new IDs. Each day, we'd make sure we had a reservation of 10% more than the maximum we used per day in the last week. If there was a run on IDs, we could then request new ones ad hoc.

This implies that I'm keeping an index of edits (which I am, in the record) and that I only need to show the user edits that have been changed since the last update instant (regardless of the creation date of that edit, because I don't want to miss someone newly recording a med on an old visit).

What do we do about records and edits that come in from other servers? Doesn't really matter, last edit wins.

2.8.2 Cluster-Specific Query Dispatch

Supporting additional cluster types is straightforward and requires only adding query-string and output parsing methods named with the cluster's identifier and setting that identifier in the ^CLUSTER("type") node.

This query method builds the query (cluster-build), executes it (cluster-run), and extracts the data from the query's output (cluster-parse).

<cluster-dispatch> =

 1: query(clusterType,requestType,node,value,version)
 2:   ; Get node values from any supported cluster.
 3:   ; Unsupported cluster types will throw runtime errors.
 4:   ; Optional: clusterType
 5:   ; ==================================================================
 6:   n cmd,output,exitStatus
 7:   s:clusterType="" clusterType=^CLUSTER("type")
 8:   ;
 9:   ; build query (cluster-build)
10:   x "s cmd=$$query"_clusterType_"(requestType,node,.value,.version)"
11:   s output=0
12:   ;
13:   ; run the query (cluster-run)
14:   d runCmd(cmd,.output)
15:   ;
16:   ; parse query output (cluster-parse)
17:   x "s exitStatus=$$parse"_clusterType_"(.output,.value,.version)"
18:   q exitStatus
19:   ;

These are the currently supported cluster types.

<supported-cluster-types> =

1:   ; supported cluster types
2: Zookeeper()
3:   q "Zookeeper"
4:   ;

2.8.3 Get Cluster State

  1. DONE Get Cluster Node's Values

    Query the cluster for a node's value and return it, its version, and the query command's exit status. The exit status is the function's return value (exit-status); the other values are returned by reference. To function properly, requestType must be one of $$Get or $$Set, while nodeName is the cluster node's name (e.g., records or edits, as produced by $$GloToQuery). The value and version are included as part of set requests (value-specified) but are skipped by default. Additionally, all the output is redirected from stderr to stdout to simplify output parsing.

    <zookeeper-query> =

     1: queryZookeeper(requestType,nodeName,value,version)
     2:   ; Get current variable counts from the Zookeeper cluster.
     3:   ; - requestType is =$$Get= or =$$Set=
     4:   ; - nodeName is =$$Records= or =$$Edits=
     5:   ; Reference: value version
     6:   ; ==================================================================
     7:   n cmd
     8:   s cmd="/usr/share/zookeeper/bin/zkCli.sh -server "_^CLUSTER("server")_" "
     9:   s cmd=cmd_requestType_" /"_nodeName
    10:   ;
    11:   ; include value and version if specified (value-specified)
    12:   s cmd=cmd_$S((requestType=$$Set)!(requestType="create"):" "_value_" "_version,1:"")_" 2>&1; "
    13:   ;
    14:   ; echo exit status (exit-status)
    15:   s cmd=cmd_"echo ""$?"""
    16:   q cmd
    17:   ;
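
    For example, with ^CLUSTER("server")="localhost:2181", a get request for the records counter builds a command string like the following (this is a trace of the code above, not captured output):

    /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 get /records 2>&1; echo "$?"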
    
  2. DONE Run OS Command

    Run an operating system command passed in as a string.

    <run-os-command> =

     1: runCmd(cmd,output)
     2:   ; Run a command, returning the exit status and line-delimited output.
     3:   ; Reference: output
     4:   ; ==================================================================
     5:   n proc
     6:   s proc="myproc"
     7:   o proc:(command=cmd:readonly)::"PIPE"
     8:   u proc
     9:   f  r output(output):0 s:output(output)'="" %=$I(output) q:$zeof
    10:   c proc
    11:   u $p
    12:   q
    13:   ;
    
  3. DONE Parse Cluster Output

    Parse the current value, that value's current version, and the exit status out of the Zookeeper cluster's output. This is necessary because Zookeeper returns a lot of unnecessary data in its output like log-file errors and the node's creation time.

    output(0): log4j:ERROR setFile(null,true) call failed.
    output(1): java.io.FileNotFoundException: /var/log/zookeeper/zookeeper.log (Permission denied)
    output(2): 	at java.base/java.io.FileOutputStream.open0(Native Method)
    output(3): 	at java.base/java.io.FileOutputStream.open(FileOutputStream.java:298)
    output(4): 	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:237)
    [...]
    output(29): WATCHER::
    output(30): WatchedEvent state:SyncConnected type:None path:null
    output(31): 1
    output(32): cZxid = 0x5
    output(33): ctime = Wed Mar 25 21:19:07 CDT 2020
    [...]
    

    <zookeeper-parse> =

     1: parseZookeeper(output,value,version)
     2:   ; Returns cluster's current value, value's version, and exit status.
     3:   ; Reference: value, version
     4:   ; ==================================================================
     5:   n line,firstValue,lineCount,exitStatus
     6:   s firstValue=1
     7:   ;
     8:   f lineCount=0:1:output s line=output(lineCount) d
     9:   . ; value immediately precedes cZxid
    10:   . i ($P(line," ",1)="cZxid") s value=output(lineCount-1)
    11:   . i $P(line," ",1)="dataVersion" s version=$P(line," ",3)
    12:   s exitStatus=output(output-1)
    13:   ;
    14:   q exitStatus
    15:   ;
    

2.8.4 Set Cluster State

  1. DONE Increment Cluster State

    The cluster's nodes should only be incremented when the server actually needs to request new data, like a new record or edit range. From the perspective of the local server, "increment" is somewhat of a misnomer, as the range of available values will be set to the cluster's next set of available values. The cluster's current value is read in before being incremented and sent back to the cluster. It's used only if the cluster accepted the value (increment-success), that is, if no other node made conflicting updates to the cluster at the same time.

    The system treats all errors equally and doesn't differentiate between malformed commands, communication errors, and outdated state. It will try to set a new node value up to ten times before giving up (increment-retry). If the system needs to retry setting data, it will wait for up to half a second between retries (increment-sleep), allowing ephemeral network issues and conflicting sets to time out.

    <cluster-increment> =

     1: increment(node,value,version)
     2:   ; Set variable counts in the cluster.
     3:   ; References: value, version
     4:   ; ==================================================================
     5:   n i,max,setSuccess,oldVer
     6:   ;
     7:   s max=10,value="",setSuccess="",oldVer=""
     8:   ;
     9:   ; try to set a new value in the cluster 10 times (increment-retry)
    10:   f i=1:1:max d  q:setSuccess
    11:   . ;
    12:   . ; only try to set data if we could successfully query the server.
    13:   . i $$query("",$$Get,node,.value,.version)=0 d
    14:   . . ;
    15:   . . ; update state with the query response, specify a new value.
    16:   . . s oldVer=version
    17:   . . s value=value+1
    18:   . . s %=$$query("",$$Set,node,value,.version)=0
    19:   . . ;
    20:   . . ; if the version has increased, we've set data. (increment-success)
    21:   . . s setSuccess=(oldVer<version)
    22:   . ;
    23:   . ; sleep for 0 - 0.5 seconds between tries (increment-sleep)
    24:   . h:'setSuccess $R(6)*0.1
    25:   ;
    26:   q setSuccess
    27:   ;
    
  2. Get Next Record or Edit IDs

    The system tries to reduce the number of possible conflicting sets by scaling the cluster's values to define a usable record or edit range that may be consumed before the server queries the cluster again. For example, if the record-scalar is set to 100, and the cluster returns 38, then the server has records 3,800 - 3,899 reserved and is expected to consume at least 95% of that range before re-querying the server. If a scalar of 100 still results in too much cluster conflict, the scalar can be tuned to any larger number required to reduce the request rate. Alternatively, a single query should finish within several seconds, so the server could also start firing queries when there are only minutes of values remaining in the range.

    <cluster-next-id> =

     1: nextId(typeGlo)
     2:   ; Get the next ID of type, querying the cluster if necessary.
     3:   ; ==================================================================
     4:   n tried,id
     5:   s tried=0
     6:   ;
     7:   ; pause here to make sure IDs are available. (wait-for-ids)
     8:   f  q:'$$needQuery(typeGlo)  d
     9:   . ; sleep for a moment if we've tried and failed to query.
    10:   . h:(tried) $$timeToQuery(typeGlo)
    11:   . ; only make the request if it's still required after sleeping.
    12:   . j incrementJob(typeGlo)  ; (job-off-query)
    13:   . s tried=1
    14:   ;
    15:   ; don't check for ID availability, just lock.  it's astronomically
    16:   ; unlikely that we'd burn through all =^CLUSTER("low-water")= values
    17:   ; between here and when we checked, a statement ago.
    18:   ;
    19:   L +@typeGlo
    20:   s id=$I(@typeGlo)
    21:   L -@typeGlo
    22:   q id
    23:   ;
    
  3. Are New IDs Required?

    If the number of available records is below the low-water-mark, we'll need to query now.

    <cluster-need-query> =

     1: needQuery(typeGlo)
     2:   ; Is the number of available records below the low-water mark?
     3:   ; ==================================================================
     4:   n minVals,recordsLeft
     5:   ;
     6:   s minVals=0,recordsLeft=0
     7:   s minVals=^CLUSTER("low-water")
     8:   s recordsLeft=@typeGlo@("max")-@typeGlo
     9:   ;
    10:   q recordsLeft<minVals
    11:   ;
    
  4. Time Until Next Query

    We must wait at least ^CLUSTER("query-delay") (microseconds) between queries. Return the number of seconds the system must wait until it can query again, or 0 if the system can query immediately.

    <cluster-query-time> =

     1: timeToQuery(typeGlo)
     2:   ; How long should we wait before querying.
     3:   ; May return negative seconds which are ignored by hang.
     4:   ; ==================================================================
     5:   n minTime,sinceLastRequest,waitTime
     6:   ;
     7:   s minTime=^CLUSTER("query-delay")
     8:   s sinceLastRequest=0
     9:   ; only compare against previous query if there has been one.
    10:   ; otherwise, we'll wait unix-epoch seconds.
    11:   s:@typeGlo@("query-time") sinceLastRequest=($$now^SetData("")-@typeGlo@("query-time"))
    12:   s waitTime=minTime-sinceLastRequest
    13:   ;
    14:   q waitTime/1000000
    15:   ;
    

    A possible future enhancement would be to make timeToQuery a record-type-specific variable, such that each record type could have a separate delay time. Currently, there's no clear indication that rate-limiting particular ID types more than others would be useful.

  5. Server Query Background Job

    incrementJob contains the jobbed off (background) increment process that communicates with the cluster on a separate thread. This allows the main thread the option of performing other tasks. In an effort to constrain complexity in this project, I've decided against letting the server perform other tasks while waiting for new records to be allocated. However, in the interests of throughput, the server could certainly be adjusted to consume the entire set of low-water records before pausing for the cluster's response.

    The server is conservative about when it queries the cluster, both because reaching across the network is relatively slow and because other jobs (other server processes, or this process's siblings) might just have submitted the same request. Before querying, the server makes sure both that it can get a lock on the global and that new IDs are actually still needed.

    <cluster-increment-job> =

     1: incrementJob(typeGlo)
     2:   ; Interpret the cluster's results to save new record IDs.
     3:   ; ==================================================================
     4:   n scalar,value,version
     5:   ;
     6:   ; lock the global immediately before continuing.
     7:   L +@typeGlo:0
     8:   ;
     9:   ; quit if we couldn't lock the global or don't need to query anymore.
    10:   q:'$T
    11:   q:'$$needQuery(typeGlo)
    12:   ;
    13:   s scalar=^CLUSTER("scalar")
    14:   s value=0,version=0
    15:   ;
    16:   ; update state, if we could reserve new IDs on the server.
    17:   s @typeGlo@("query-time")=$$now^SetData("")
    18:   i $$increment($$GloToQuery(typeGlo),.value,.version) d
    19:   . s @typeGlo=value*scalar
    20:   . s @typeGlo@("version")=version
    21:   . s @typeGlo@("max")=(value+1)*scalar-1
    22:   . j nameReservation($$GloToQuery(typeGlo),value)
    23:   ;
    24:   ; since increment can retry, make sure query-time is current.
    25:   s @typeGlo@("query-time")=$$now^SetData("")
    26:   ;
    27:   ; unlock the global before quitting, too.
    28:   L -@typeGlo
    29:   ;
    30:   q
    31:   ;
    

    In order to keep track of which server makes which edits, reservations are named in the cluster.

    <name-reservation> =

    1: nameReservation(node,value)
    2:   ; Store that this node has reserved the ID range in the cluster.
    3:   ; ==================================================================
    4:   s %=$$query("","create",node_"/"_value,^CLUSTER("server"),"")
    5:   q
    6:   ;
    

    This data is unlikely to be useful except for audit purposes. Since the primary data store is the cluster, of which the server is simply a local cache, there's no need for servers to retain exclusive edit rights to (ownership of) specific records.

2.8.5 One-Time Cluster Setup

This configures the server so it can cooperate with the rest of the servers in the cluster. All servers in a single cluster must use equivalent configurations, changing only the server's name as necessary.

Several important variables are defined here:

Last Audit
Database changes are timestamped in the ^AUDIT global as they are applied. The last-edit node records the latest audit entry retrieved from the cluster. Note that this value is not server-local: it indexes the cluster's list of all the edits recorded on the database. The server periodically requests updates from the cluster and locally loads every change from that point through the current edit.
Low Water
The minimum number of records that may remain available before the server queries the cluster again; the server will not query until the reserved range has been consumed down to this point.
Query Delay
The minimum time between cluster queries (stored in microseconds). When multiple threads attempt to query at the same time, each will wait until the minimum query time has passed, and the first to get the relevant lock will query the cluster. The remaining threads will wait until the query is complete and they have valid IDs.
Server
The server's address.
Scalar
Replies from the cluster are scaled to create a range of values the server owns. Servers only need to query for values when the number of available values dips below the minimum. The server attempts to load the value from the cluster or sets the default to 10,000 values, if the cluster is unavailable (cluster-load-scalar).
Type
The type of cluster running. Must be set before querying the cluster.

<cluster-setup> =

 1: setup()
 2:   ; Perform one-time cluster setup.
 3:   ; ==================================================================
 4:   L +^CLUSTER:0
 5:   q:'$T
 6:   s $ETRAP="B"
 7:   s ^CLUSTER("last-edit")=0
 8:   s ^CLUSTER("load-delay")=60*1000*1000    ; 1m
 9:   s ^CLUSTER("query-delay")=0.1*1000*1000  ; 0.1s (query-delay)
10:   s ^CLUSTER("server")="localhost:2181"
11:   s ^CLUSTER("type")=$$Zookeeper
12:   s ^CLUSTER("scalar")=$$loadScalar()
13:   s ^CLUSTER("stop")=0
14:   s ^CLUSTER("low-water")=^CLUSTER("scalar")*0.05  ; (low-water)
15:   L -^CLUSTER
16:   q
17:   ;

Loading cluster data from the cluster is somewhat recursive and requires bootstrapping with some placeholder values.

<cluster-load-scalar> =

1: loadScalar()
2:   ; Load the cluster's scalar value.
3:   ; ==================================================================
4:   s ^CLUSTER("low-water")=500
5:   n value,version s value=0,version=0
6:   s %=$$query("","get","scalar",.value,.version)
7:   s:'value value=10000  ; (cluster-load-scalar)
8:   q value
9:   ;

Use the cluster by running jobs against it.

<cluster-start> =

 1: start()
 2:   ; Query the cluster for updates on occasion.
 3:   ; ==================================================================
 4:   s ^CLUSTER("stop")=0
 5:   j readChangesJob()
 6:   q
 7:   ;
 8: stop()
 9:   ; Stop querying the cluster.
10:   ; ==================================================================
11:   s ^CLUSTER("stop")=1
12:   q
13:   ;

2.8.6 Convenience and Utility Functions

<cluster-convenience> =

1:   ; other convenience functions
2: GloToQuery(global)
3:   ; global must be $na(^RECORDS) or $na(^EDITS)
4:   q $$FUNC^%LCASE($P(global,"^",2))
5: Get()
6:   q "get"
7: Set()
8:   q "set"
9:   ;

2.8.7 DONE Read cluster scalar size after we read a new edit range.

Set scalar size when initializing the ZK cluster. Don't bother frobbing the scalar size during the run since 10,000 is big enough for my test purposes.

2.8.8 DONE Sync Changes

  1. DONE Write Edits Out

    ^AUDIT entries are written to the ZooKeeper cluster as they are made. This is called from SetRecord^SetData and AppendRecord^SetData.

    <writeChange> =

     1: writeChange(edit)
     2:   ; Set a record edit in the cluster.
     3:   ; edit must be some value in =$Q(^AUDIT)=.
     4:   ; ==================================================================
     5:   n auditValue,auditVersion,escaped,i
     6:   ;
     7:   s edit=edit_"="""_@edit_""""
     8:   s escaped=""
     9:   f i=1:1:$L(edit,"""") s escaped=escaped_$P(edit,"""",i)_"\"""
    10:   s escaped=""""_$E(escaped,1,$L(escaped)-2)_""""
    11:   s %=$$increment("audit",.auditValue,.auditVersion)
    12:   s %=$$query("","create","audit/"_auditValue,escaped,"")
    13:   ;
    14:   q
    15:   ;
    
  2. DONE Read Edits In

    Edits are periodically loaded onto the local server from the cluster by a job that queries the cluster for the current edit ID and loads all the edits that have occurred since the previously loaded edit. Yes, this does cause the server to load its own edits, but that doesn't matter since ^AUDIT can't contain conflicting edits. Every edit is both timestamped to the microsecond and has an ID assigned to a single server so conflicting edits could only occur if a single server tried to set a node to two different values at the same microsecond. This seems unlikely enough during testing that it has been ignored, though a production system may implement that sort of duplicate checking in SetRecord^SetData by checking for the existence of a node with $D before setting it.

    <readChangesWithLocks> =

     1: readChangesWithLocks(since)
     2:   ; Load database changes from the cluster.
     3:   ; Assumes lock: =^CLUSTER("last-edit")=
     4:   ; ==================================================================
     5:   n value,version,edits,maxEdit
     6:   ;
     7:   s:'since since=^CLUSTER("last-edit")
     8:   s %=$$increment("audit",.maxEdit,.version)
     9:   ;
    10:   ; piece out and load cluster data into the local server.
    11:   f edits=since:1:maxEdit d
    12:   . s value=0,version=0
    13:   . s %=$$query("","get","audit/"_edits,.value,.version)
    14:   . d:$E(value,1,7)="^AUDIT(" loadAudit(value)
    15:   ;
    16:   s ^CLUSTER("last-edit")=maxEdit
    17:   ;
    18:   q
    19:   ;
    

    Since the server is loading data from the cluster, including its own previous writes, the normal data loading method must be circumvented. Data is stored as writes on the cluster, and ^AUDIT sets are executed directly from the cluster's data. We then read the last global reference ($R) to see how many subscripts it contained and use that to determine how to load the data into the data globals.

    Nodes will have one of two forms: a six-subnode form and a seven-subnode form. The six-subnode form contains all the normal subnodes: the instant the server became aware of the edit, the original instant the edit occurred on, the global, the record ID, the edit ID, and the field. The seven-subnode form contains one additional subnode: the value's position in the field's list.

    ^AUDIT(1587440374000217,1587440374000217,"PATIENT",3020009,400009,2)="zzztest zzztest"
    
    
    ^AUDIT(1587440374891763,1587440374891763,"PATIENTLINK",3020010,400010,2,1)=3020011
    
    

    In the case of local edits, the first two nodes are always equal. However, when edits are loaded from the cluster, the server will replace the first timestamp with its own current timestamp (update-local-audit). This keeps the local chronology of the audit trail self-consistent: entries are ordered according to when the server first learned about them, not when they were originally applied to the patient's record on a remote server. When recording the edit into the actionable data globals, the original edit instant is used. This keeps the patient's record chronologically consistent between servers, as data appears chronologically in the patient's history on the instant it was entered.

    However, since local timestamps are recorded, enough contextual information is provided that complex determinations about event ordering can be made, like "this morning, we learned that last month's appointment was updated yesterday." This is useful in the case of an end-user reviewing the patient's chart: it allows the system to highlight new data (edits to old appointments) that arrived since the user began the chart review while displaying it in the historical context of the appointment it was recorded in.

    <loadAudit> =

     1: loadAudit(value)
     2:   ; Loads cluster edits into server globals.
     3:   ; Sets the first subnode to the current Unix epoch, to indicate when
     4:   ; the server was notified that the edit arrived.
     5:   ; ==================================================================
     6:   n params
     7:   ;
     8:   ; update local audit time
     9:   s params=$P(value,"(",2)
    10:   s $P(params,",",1)=$$now^SetData("")  ; (update-local-audit)
    11:   s $P(value,"(",2)=params
    12:   ;
    13:   ; replay ^AUDIT changes.
    14:   s @value
    15:   ;
    16:   ; I believe this is only possible because the ^AUDIT nodes may
    17:   ; never contain commas.
    18:   n numNodes
    19:   s numNodes=$L($R,",")
    20:   i numNodes=6 x "d loadEntry("_$TR($P(value,"(",2),")=",",")_")"
    21:   i numNodes=7 x "d loadMultiEntry("_$TR($P(value,"(",2),")=",",")_")"
    22:   ;
    23:   q
    24:   ;
    25: loadEntry(localInstant,editInstant,global,recordId,editId,field,value)
    26:   ; Loads cluster edits into non-list server globals.
    27:   ; Ignores: localInstant
    28:   ; ==================================================================
    29:   x "s ^"_global_"(recordId,editId,field,editInstant)=value"
    30:   q
    31:   ;
    32: loadMultiEntry(localInstant,editInstant,global,recordId,editId,field,line,value)
    33:   ; Loads cluster edits into list-type server globals (=^PatientLink=).
    34:   ; Ignores: localInstant
    35:   ; ==================================================================
    36:   x "s ^"_global_"(recordId,editId,field,editInstant,line)=value"
    37:   x "s %=$I(^"_global_"(recordId,editId,field))"
    38:   q
    39:   ;
    

    Lock ^CLUSTER("last-edit") before reading from the cluster.

    <readChanges> =

    1: readChanges(since)
    2:   ; Load database changes from the cluster.
    3:   ; ==================================================================
    4:   L +^CLUSTER("last-edit")
    5:   d readChangesWithLocks(since)
    6:   L -^CLUSTER("last-edit")
    7:   q
    8:   ;
    

    Query the cluster until the stop flag is set.

    <readChangesJob> =

    1: readChangesJob()
    2:   ; Continuously read edits from the cluster.
    3:   ; ==================================================================
    4:   f  q:^CLUSTER("stop")  h ^CLUSTER("load-delay") d readChanges("")
    5:   q
    6:   ;
    

2.9 CANCELLED Display UI of Changes

This should show all changes made since the patient's chart was opened by default, with a slider for the provider to back up to whatever set of changes they'd like to see for the patient, with stops at any of their previous visits.

We want to highlight both:

  • Changes since last visit.
  • Recent (concurrent) changes.

2.10 CANCELLED Get workflow timings

User-workflow-wise, I'll necessarily come out on top because multiple workflows can be completed concurrently instead of sequentially.

Use a single multi-specialty visit workflow for the performance example because that's where the difference would be most noticeable. This method can also compete against separate office visits or multiple users documenting in a single visit at the same time, but we want to make certain to showcase the biggest differences. We may also want to try the "OP specialty (PT?) visit during IP admission" workflow, which is tricky because the patient has concurrent IP and OP visits.

This project may have a more limited application than I'd originally thought, but it should still result in increased user satisfaction.

2.10.1 CANCELLED Analyze Workflow Timings via Cogtool

2.10.2 Workflows

Workflows can be segmented into novice and expert categories. Note that the recorded timings are idealized timings, as they required no interaction with the patient. It's rare for users interacting with the patient to document stream-of-consciousness, as there's usually a significant amount of back and forth, taking up to (VERIFY) time.

  1. CANCELLED Find verifiable numbers on the amount of time patient interaction takes during a visit.
  2. CANCELLED Find verifiable numbers on each of these workflows.
  3. CANCELLED Update Demographics, Medical History
    1. The patient's birth-date was typoed. They're actually three years younger than was recorded. Correct the patient's birth-date.

    Expect: ~30s, or 1 minute with warnings.

  4. CANCELLED Record Vitals

    Record these vitals:

    1. BP: 120 / 80
    2. Height: 170 cm
    3. Weight: 70 kg
    4. Respiration: 20

    Expect: ~30s, or 1 minute with warnings.

  5. CANCELLED Write Medication Prescription

    Write a prescription for Ibuprofen 200 mg oral tablets, taking two tablets, twice daily, starting today, for four weeks. 2 refills.

    Expect: ~1 minute.

  6. CANCELLED Write Lab Requisition

    Place a requisition for an x-ray with 2 views of the left wrist.

    Expect: ~1 minute.

  7. CANCELLED Review Changes

    Provider reviews nurse's concurrent changes.

    Expect: 30s.

2.10.3 Measurement

Measure 3 runs of each of the workflows.

2.10.4 Analysis

Average the three runs of each of the workflows.

Sequential workflow with quick synchronous hand-off.

<sequential> =

@startuml
caption Time (minutes)
concise "Nurse" as RN
concise "Doctor" as MD
scale 1 as 200 pixels

@0
RN is Vitals
MD is Waiting
@+1

RN is History
@+1

RN is "Hand Off"
MD is "Hand Off"
@+0.5

MD is Ordering
RN is Idle
@+4

MD is Idle
@enduml

Concurrent workflow with longer review time for asynchronous hand-off, and no reactions required from the review (all entered orders were still valid).

<concurrent> =

@startuml
caption Time (minutes)
concise "Nurse" as RN
concise "Doctor" as MD
scale 1 as 200 pixels

@RN
0 is Vitals
+1 is History
+1 is Idle

@MD
0 is Ordering
+4 is Review
+1 is Idle
@enduml

Difference: 30s, or 1/6 of total time.

2.11 DONE Build Disk Image on Push

I can build the image locally, but I haven't yet made it available with the published webpage. Propellor fails to publish for unclear reasons hess19:_propellor_docs,hess18:_uenkn_os,hess17:_high_bandwidth,hess15:_propel_disk_images,hess14:_propel_containers,hess17:_propel_arm_images,hess19:_propellor_docs.

2.11.1 DONE Add graphviz and plantuml images.

2.11.2 TODO Add LorikeeM fontlock

Lorikeem's a pain to set up correctly. I might use my own copy, though.

2.11.3 TODO Add org-ref

2.12 DONE Build Paper on Push

2.12.1 DONE Add graphviz and plantuml images.

2.12.2 TODO Add LorikeeM fontlock

2.12.3 TODO Add org-ref on export.

3 Distributing the Mumps Database and Eliminating User-Level Locks

In Mumps, the lock operation (L+) is server-local, which encourages centralizing processing on a single production server within a data center. This leads to significant capital investment costs to purchase servers large enough to handle the massive processing requirement of both entering and analyzing medical data, on demand and proactively. By storing data in a non-shared data structure, it is possible to reduce infrastructure costs by allowing multiple inexpensive servers to perform the work of one costly server. It may also be possible to improve user workflow throughput by preventing users from locking one another out of records.

3.1 Introduction

Mumps is both a classic NoSQL database and a programming language. Data are stored in first-class hierarchical, sparse trees ("globals") that allow for high write throughput, while programs are written in an assembly-like syntax that allows for run-time evaluation. For all its simplicity and power, however, Mumps was originally designed in the 1960s for a much more centralized and server-focused world than the cluster-focused approach commonly used today. As such, all locks in the system are server-local, precluding many simple distributed designs. This paper takes the stance that, assuming the volume of data to process and analyze is too large to be cost-effectively handled on a single production server, it should be possible to circumvent the standard Mumps lock model to reduce centralization and hardware costs.

This research has three goals. First, it aims to reduce lock contention between servers by removing the concept of local locking, thus eliminating the concept of server-specific record ownership. Second, once record locks are no longer owned by any specific server, the database may be distributed across multiple servers without conflict, effectively making it a cluster-distributed database. Third, with local locks removed, multiple users may document on a single record at the same time, using a type of last-edit-wins conflict resolution. This will enable future work to replace locks with a transactional model.

3.2 Workflow

In order to provide an overview of the disparate parts of the system and how they work together, a workflow with example end-users is provided. Links in this section point to the project's annotated source code.

Mr. Steele has called Dr. Granite, asking for a refill on his allergy medications. On reviewing the patient's chart, she has decided to place a new medication prescription, because Mr. Steele's last prescription was over a year ago and he no longer has any refills remaining.

When Dr. Granite goes to place the prescription, the system will create a new medication prescription record in the ^MEDRX global (database table) and associate any changes made in the doctor's session with a specific edit identifier. Since Mumps is a hierarchical key-value store, each change is recorded as a sub-node of the edit identifier. In this instance, the system has selected record ID 315 and edit ID 2907.

Dr. Granite records a new prescription for 30 once-daily 10 milligram Claritin tablets, with 11 refills to last Mr. Steele a full year. She then asks a nurse to call the order in to Mr. Steele's pharmacy.

The system will record these details in particular record fields, like the dispense quantity (field 6), or the number of refills remaining (field 8). To make sure historical data about a record don't get lost, each value is also timestamped by an edit instant, the current Unix Epoch in microseconds. This is detailed in the Data Global Structure section.

; (record, edit, field, instant)=value
^MEDRX(315,2907,6,1588363791787797)=30

When the new prescription is recorded in the prescription record, it's also separately recorded in the ^PATIENTLINK global, which is used to associate other records with the patient's original record. This little bit of indirection allows many users to quickly link new records to the patient's record without ever locking the patient's record. Field 2 in the patient link record stores each of the prescriptions written for the patient. Since the patient link record allows multiple records of each type to be linked back to the patient, it includes another level in the data hierarchy, the entry, which stores one line for each concurrently-linked record.
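For illustration, the link might be stored like this (the patient link record ID, edit ID, and instant are hypothetical; field 2 holds the prescription list, and the trailing subscript is the entry's position in that list):

; (record, edit, field, instant, entry)=linked record ID
^PATIENTLINK(401,2908,2,1588363791787801,1)=315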

At the same time the edits are recorded in the data global, they're also recorded in the ^AUDIT global, a chronological ledger or journal of every change made in the system, for later reporting and synchronization purposes. In order to enforce chronological order, the first two subnodes of ^AUDIT are timestamps. The first subnode is the local instant that the server first learned about the edit, while the second subnode is the edit instant as recorded on the originating server. In the case of local edits, these values will be the same. This is detailed in the Audit section.

; (i1, i2, global, record, edit, field)=val
^AUDIT(..7797,..7797,"MEDRX",315,2907,6)=30

Once the data are saved to the ^AUDIT global, the changes are submitted to a background process that pushes them on to the distributed database, an Apache ZooKeeper cluster. This is detailed in the Cluster section.

/audit = 9
/audit/9="^AUDIT(..."MEDRX",315,2907,..."

The local server periodically reads changes in from the cluster, in the form of ^AUDIT nodes. When the nodes arrive, the first and second subnodes are equal (because they were recorded as local edits on the originating server). The server then replaces the first subnode (the local instant) with its own local time before loading the edit into its own ^AUDIT and data globals. This allows the data globals to appear chronologically according to when edits were originally made, while the ^AUDIT global is ordered according to when the server was first notified about the changes. This is detailed in the Audit section.
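As a sketch (the replacement local timestamp below is hypothetical), an incoming node and the form in which it is stored locally:

; as read from the cluster (written on the originating server)
^AUDIT(1588363791787797,1588363791787797,"MEDRX",315,2907,6)=30
; as stored locally, with the first subnode replaced by the local instant
^AUDIT(1588363800123456,1588363791787797,"MEDRX",315,2907,6)=30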

Dr. Granite moves to finalize her documentation and close the patient's chart. Before the chart saves and closes, however, she must review any warnings accompanying her new documentation, as well as any other changes that were made to Mr. Steele's chart since she opened it. It looks like her nurse, David, updated Mr. Steele's weight while on the phone. He's helpfully lost a few pounds, but it's nothing that would affect his allergy medication. Dr. Granite makes a note to congratulate Mr. Steele on his exercise regimen when he comes in for his annual physical next month.

In the future, this project will apply the user's changes to the record as a transaction and stop to display any other changes recorded in the ^AUDIT global since the user opened the chart. The user will have the opportunity to review the changes, the originating user, and that user's contact information (in case clarification is needed), before accepting their own changes.

3.3 Design

3.3.1 Data Structure

The system is built from two primary pieces: the local database and the remote cluster.

  1. Data Global Structure

    In this system, data are stored in globals (equivalent to SQL tables) that address each value by four identifiers: record ID, edit ID, field ID, and save instant. IDs are not divided by record type; instead, a single list of available IDs is shared among all record types. Edits follow the same shared-ID paradigm. This was done to reduce the amount of state in the database and the amount of data that would need to be synced with the cluster. The meaning of an edit is use-case specific: a patient edit may be a specific appointment, while a medication edit might be an administrator loading a monthly medication pricing database update.

    data-global-layout.png

    Figure 2: Patient key-value datastore, where every value has its own edit instant.

  2. Cluster Structure

    In this system, the usual distributed database configuration is assumed: a single ZooKeeper cluster, made up of several database nodes, serves requests for multiple application servers, which each serve multiple users on any number of connected devices. Any node in the cluster may respond to read requests but, in order to preserve a single total order for events in the cluster, only the leader node may write to the cluster's state. A server requesting a write may contact any node in the cluster and will be redirected to the cluster's current leader. The changes submitted are the newly-created ^AUDIT node entries, which are pushed into the next free entry under the cluster's own "/audit" node.
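    A minimal sketch of that push, reusing the $$increment and $$query helpers from the experimental notes earlier in this document; the routine name is hypothetical and error handling is omitted:

    <hypothetical-audit-push> =

    pushAudit(auditNode)
      ; Reserve the next /audit slot from the leader, then create the child
      ; node holding the serialized ^AUDIT entry.
      n id,version
      s %=$$increment("audit",.id,.version)
      s %=$$query("","create","audit/"_id,auditNode,"")
      q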

3.3.2 Database Contents

Two main types of data are stored in the database:

  1. Data Globals: Each data global stores one type of record data.
  2. Audit Global: A running transaction log or journal of all the changes made to the database, used for syncing to and from the cluster.
  1. Global List

    For this experiment, a custom data model was devised to hold test patient data. Seven data types were created to hold six discrete types of data, along with a linkage record (1-to-many relation table). Each of those data types is stored in a separate global with different fields.

    Patient
    Stored in the ^PATIENT global, patients each contain basic information about the patient, like the patient's ID, name, sex, and birth date. Appointment-specific information is also recorded, like the patient's vitals, including blood pressure, temperature, and respiration rate.
    Medication
    Stored in the ^MED global, medications each contain an identifying name. More complex and complete information like generic form, RxNorm drug identifier wiki:rxnorm, form-specific drug concentrations, or pharmacy availability, was not necessary to create a usefully complex data model.
    Prescription
    Stored in ^MEDRX, prescriptions each contain a link back to the patient's ID, as well as the medication ID that they represent. They also contain non-record-relation information, like prescription duration, and frequency and number of refills remaining.
    Lab, Procedure
    Stored in ^PROC, laboratory orders and procedures are, like medications, identified by name. These include procedures like a forearm x-ray or diagnostic laboratory analyses like a complete blood count ("CBC").
    Notes
    Stored in ^NOTES, medical notes are text-based documents users can write about patients during a visit. They are linked to the patient and may be hidden or made visible to the patient.
    Ordered Procedures
    Stored in ^PROCRX, ordered procedures are used to link a specific patient and procedure. They also contain information like the date the procedure is scheduled for and any comments the physician might want to note when requesting the procedure.
    Patient Link
    Stored in ^PATIENTLINK, this record links to lists of notes, prescriptions, and lab or procedure orders for a single patient. This allows, for example, multiple users to edit different prescriptions while another user edits the patient's visit. In this case, users would only compete for a lock when adding or removing a prescription from the link record itself.

    database-layout-summary.png

    Figure 3: The experiment's data model and data linkages.

  2. Audit

    Core to this implementation is the ^AUDIT node, which contains both the local time an edit was received (from the cluster or from a local user) and the time the edit was originally made on its originating server. This allows us to preserve the total order of both local and remotely created edits. The first two subnodes of the ^AUDIT global are timestamps: the local timestamp, when the record or edit ("change") was created or received, and the cluster-wide timestamp, when the edit was first created on its originating server before being uploaded to the cluster.

    In the case of local edits, the first two nodes are always equal. However, when edits are loaded from the cluster, the server will replace the first timestamp with its own current timestamp. This keeps the server-local chronology of the audit trail self-consistent: entries are ordered according to when the server first learned about them, not when they were originally applied to a record on a remote server. Enough contextual information is provided that complex determinations about event ordering can be made, like "this morning, we learned that last month's appointment was updated yesterday." This is useful in the case of an end-user reviewing the patient's chart: it allows the system to highlight new data (edits to old appointments) that arrived since the user began the chart review while displaying it in the historical context of the appointment it was recorded in. It also makes it possible to play back what information users on the system had at any point when making decisions.

    ^AUDIT has a slightly different format for local edits than it does for remote edits. Since record and edit creation times are non-actionable metadata, they aren't mirrored to the cluster to reduce the total cluster load. Thus, all the record or edit creation times stored in ^AUDIT are for locally created records or edits.

3.3.3 Eliminating Conflicting Edits

Methods used to prevent local and remote users from overwriting existing data in the database, and thus eliminate the need for record-wide server locks, include:

  1. Making a high-resolution, server-unique, edit instant part of each value's address.
  2. Making a non-overlapping, server-unique, record and edit identifier part of each value's address.
  3. Keeping a permanent historical record of the local and remote edit times for each change.

With these three methods together, it is nearly impossible for either local or remote users to accidentally overwrite existing edits. The single outstanding case is addressed in the Future Work's Data Structure Improvements section.

  1. Global Structure: Removing Local Locks

    Locks are applied to records to prevent users from making conflicting edits to a record. Normally, when a user opens a record for editing, the system first tries to acquire an exclusive write-lock on that record. If a record contains a broad array of data so that there are many workflows and use cases where a user might want to lock a record, or if a record is in high demand, any lock may delay a large number of changes.

    Since locks are used to reduce the chance of conflicting edits, the obvious solution is to reduce the size of the window in which conflicts could occur. Thus, every value is addressed by an edit instant, and the database has no concept of ahistorical (single-entry, most-recent-value-only) data. This allows us to effectively treat the database itself as an append-only transaction log. Since edit instants have a microsecond resolution, are server-local, and are tied to a specific field on a record, rare conflicts may be resolved by incrementing the edit instant until an unused microsecond is found.
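    A minimal sketch of that collision handling, assuming a small helper inside the save path; the label is hypothetical:

    <hypothetical-instant-retry> =

    nextFreeInstant(global,recordId,editId,field,instant)
      ; Advance the microsecond until this field has no value stored at the
      ; candidate instant, then return the free instant.
      f  q:'$D(@global@(recordId,editId,field,instant))  s instant=instant+1
      q instant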

  2. Cluster-Synchronized Edit Identifiers: Removing Remote Locks

    Several other steps can be taken to eliminate the need for locks, even when the database is distributed across a cluster. The simplest approach is to give each server its own set of unique identifiers for both new records and edits within those records. This way, any change created on a server must have an ID within that server's range and won't accidentally be merged with any other server's records when it's pushed to the cluster. When a server has consumed 95% of its available change ID range, the server will query the cluster until the change range shortage is resolved. The server will also continue to allocate IDs until the entire range is consumed. If the entire range is consumed without a valid cluster reply, the server will hang until it receives a reply, as it can no longer safely allocate change IDs to processes.

    With these reserved ID ranges, no server needs to worry about accidentally conflicting with another server's IDs. Since these ID ranges are monotonically increasing values, new servers can be added to the cluster at any time. New servers will notice that they have no ID range reserved and query the cluster before creating any new changes.

    Taking guidance from Chubby burrows2006chubby, the cluster is able to modulate the rate at which servers request new change ID ranges. This is done by introducing a change scalar which determines the size of each server's change ID range. Currently, new ID range requests increment the cluster's change identifier by one, and the server then reads the cluster's scalar value to determine the new ID range: [ID * scalar, (ID+1) * scalar). The cluster may increase the scalar at any time to slow down the rate at which servers request new IDs.
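    The range computation itself is simple. The sketch below assumes illustrative cluster node names ("records", "records/scalar") and reuses the $$increment and $$query helpers:

    <hypothetical-id-range> =

    reserveRange(kind)
      ; Increment the cluster's change counter for this kind of ID, read the
      ; current scalar, and derive the half-open range [id*scalar,(id+1)*scalar).
      n id,scalar,version
      s %=$$increment(kind,.id,.version)
      s %=$$query("","get",kind_"/scalar",.scalar,.version)
      q (id*scalar)_","_((id+1)*scalar-1)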

3.3.4 Patient Safety Implications of Transactional and Lock-Free Design

Since this data sharing model could be used in safety-critical situations fry_schulte_2020,parmar_2016_elmiate,weant_bailey_baker_2014, it should be reviewed with respect to what sorts of harm could befall a patient, and in which situations.

In outpatient (family practice) situations, the data update analysis is straightforward. Assuming most documentation is completed while the patient is present or within 24 hours, it's unlikely that the patient would have interacted with a different server that would have made meaningful changes to the record while the local user is documenting. In that case, the most recent previous data is likely days or hours old and would have been loaded from the cluster minutes after it was initially documented. The patient's providers would have had all the relevant and updated information before they even started documenting on the patient.

The problem becomes harder for admitted inpatients where multiple providers on a care team can document on the patient at the same time. However, this issue is mitigated in two ways:

  1. Server-locality. Since the patient's care team of providers is often server-local, limited to the team working within a single hospital, no cross-server talk is generally required. In the case of consulting providers (at other locations, on other servers), those providers are generally limited to a consulting role and consult with a local provider who makes final decisions and orders the actual treatment, again keeping the edits local. This server-locality allows the server to act transactionally and prevent the user from saving edits if other, unreviewed, data has been created for the patient since the user started.
  2. On-action warnings. Modern Electronic Medical Records (EMRs) contain a vast array of warnings that occur when users take actions. Some of the most urgent warnings are medication-administration warnings. When a nurse is preparing to administer a medication, that medication is validated in several ways, including against other medications it may interact with and against medically recommended doses. If a medication is found to interact badly with another medication or one of the patient's conditions, or is simply an unusually high or low dose (because the ordering user mistyped), the administering user will be warned before administration. Users would generally seek guidance for unanticipated or severe warnings, preventing them from unintentionally completing a potentially dangerous administration.

    A patient transitioning between servers is also unlikely to result in medication administrations being lost. Since outgoing syncing is immediate, the administration would be pulled down to the local server no more than two minutes after it was documented. It is unlikely that a patient would be discharged from one emergency department, arrive at another, be triaged, admitted, and prepped for the administration of medications in under 2 minutes. The more problematic workflow would be a patient who left against medical advice immediately after the administration, possibly distracting the administering user and preventing them from marking the medication administration as complete. However, the data would again sync inside of two minutes and be available for any future emergency department. A patient who arrived at an emergency department immediately after leaving another one would probably result in calls between the departments and certainly would be examined closely. In that situation, the EMR would have synced data quickly enough to flag end users that something fishy was going on.

Nonetheless, there are two cases where data may not be communicated: first, when the receiving site can't receive data from the EMR, either because it doesn't have a compatible (or any) system; and second, when one of the servers or the cluster is down. In that case, the information simply is not there to be made available to local users, though users expect it to be. Given this possibility, any robust analysis of the patient safety implications of a particular EMR on the local area should include basic availability metrics like system compatibility checks and uptime percentages.

3.3.5 Software Libraries

The following libraries were produced as part of this experiment and are available to users; a brief usage sketch follows their descriptions below.

SetData
Provides functions to save data to local records, in response to user edits.
Cluster
Provides a convenient front-end for bidirectional cluster data movement.
  1. SetData

    The functions available in the SetData library are concerned with saving data to local records in response to user edits.

    NewRecord
    Reserves a new record ID from the server's shared list of record IDs. If the number of IDs remaining in the ID list is below the remaining ID threshold, the server will query the cluster through nextId^Cluster to get a new list of available IDs. When the server receives the new list of IDs, it will immediately replace the existing ID list with the new one, regardless of how many IDs remain. Thus, setting the low-water mark is a balancing act between responsiveness and wasted ID consumption. The new ID requests are made on a separate thread to avoid unresponsiveness, though the server currently waits until enough IDs are available before continuing.
    NewEdit
    Reserves a new edit ID from the server's shared list of edit IDs. Also queries the cluster for new edit IDs when necessary.
    SetRecord
    After a record and edit ID are reserved, the user may then save data to that record on that edit. This has the side effect of also sending the newly set data out to the cluster on a separate thread. For ease of use, SetRecord assumes that each field contains a single value.
    AppendRecord
    To handle ^PATIENTLINK records, which can hold multiple values per field, AppendRecord adds an additional level to the data hierarchy that stores the value's current location in the field's list. Since Mumps databases are sparse, this can lead to surprising data storage: it would be possible, for example, to have a list with only a third entry.
    GetRecordList
    Return a whole list at once, for a particular field's edit instant.
    SetRecordList
    Set an entire list at once, for a particular field's edit instant.
  2. Cluster

    Several functions for interacting with the ZooKeeper cluster are available.

    Query
    Perform a query against a cluster. Currently, ZooKeeper clusters are the only supported cluster type. If getting data from the cluster, return the node's value and version. If setting data in the cluster, the user must include the most recent version ID for that node before the cluster will perform the set. The system will query the cluster no more often than the query delay, which is one of the settings available in the ^CLUSTER global.
    Increment
    Increment an already-existing node. The system queries the cluster at least twice: first to get the node's current value and version, and then again to increase the value by one. If no other sets have been performed on the node, then the version is still current and the set succeeds. However, if the node has been changed between the two requests, the local version is outdated and the process gets the node again. If the process can't increment the node in 10 tries, it fails. It is then up to the caller to retry the increment or throw an error.
    Start
    Starts a background job that reads in database changes from the cluster every minute. Since ZooKeeper supports 10,000 - 20,000 operations per second, this should be a very manageable load hunt2010zookeeper. Every database change pushed to the cluster is given a monotonically increasing ID. The background job uses that ID to keep track of the last database change it loaded and then, once every minute, loads in all the new changes from the cluster. This does result in the server reloading the changes it previously sent via SetRecord.
    Stop
    Terminates the background job reading from the cluster. This function does not prevent the server from sending outstanding updates back to the cluster, in order to avoid stranding changes on the local server.
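    The sketch below shows how these libraries fit together for the workflow's prescription. The call signatures are inferred from the descriptions above and may differ from the tangled sources; linkRec and linkEdit are assumed to identify the patient's existing ^PATIENTLINK record and current edit.

    <hypothetical-library-usage> =

    demo(linkRec,linkEdit)
      n rec,edit
      d start^Cluster()                       ; begin ingesting cluster edits
      s rec=$$NewRecord^SetData()             ; reserve a record ID
      s edit=$$NewEdit^SetData()              ; reserve an edit ID
      d SetRecord^SetData("^MEDRX",rec,edit,6,30)   ; dispense quantity
      d SetRecord^SetData("^MEDRX",rec,edit,8,11)   ; refills remaining
      d AppendRecord^SetData("^PATIENTLINK",linkRec,linkEdit,2,rec)  ; link to patient
      d stop^Cluster()                        ; stop the background reader
      q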

3.4 Performance

Table 1: Request completion time, in seconds, per mode.
Requests  Concurrent Dispatch (s)  Concurrent Complete (s)  Sequential Dispatch (s)  Sequential Complete (s)
     100                        0                      110                      188                        0
     200                        1                      216                      378                        0
     300                        1                      326                      567                        0
     500                        3                      545                      943                        0
    1000                        9                  <error>                     1883                        0

Performance measurements were taken over a period of two days on the CloudLab Wisconsin cluster, using a c220g1 system, as detailed in Table 2. The throughput of two different cluster communication dispatch methods was tested, in which the server queued up 100, 200, 300, 500, and 1000 requests and dispatched them either concurrently, as fast as the server could edit records in the Mumps database, or sequentially, allowing one record to complete before moving on to the next. The dispatch method was controlled by either forking off the database update process or allowing it to run in the main thread.

These measurements showed that dispatching cluster update events asynchronously, on a new thread, is the best way to keep the cluster updated. Overall, the concurrent dispatch process takes 0.57 seconds per request on average, while the sequential process takes 0.99 seconds per request, or 73% longer. This suggests that a significant amount of the delay is caused by synchronous network communication instead of disk time or CPU processing. These results were highly unexpected because ZooKeeper is advertised as handling thousands of requests per second. Nonetheless, the concurrent dispatch method was able to saturate and disconnect the remote ZooKeeper test server simply by overwhelming it with unresolved open connections. In those cases, the server could only be recovered by rebooting. Further investigation showed that the JRE was unable to create new threads to handle the incoming connections because no RAM was available. It is not clear whether the errors were due to the server being overburdened or were caused by the steps taken to recover the overburdened server.

<request-time> =

library(ggplot2)
library(dplyr)

##theme_set(theme_grey(base_size = 16))
mydata <- read.csv("data/summary.csv", header = T, sep=",")
## requests;type;step;time

ggplot(data = mydata,
       mapping = aes(x = requests,
                     y = time,
                     color = type,
                     shape = step)) +
geom_point(size = 4) +
geom_line(
    data = filter(mydata, type == "concurrent", step == "resolved")) +
geom_line(
    data = filter(mydata, type == "sequential", step == "dispatch")) +

labs(title="Request Completion Time",
     color="Type:",
     shape="Step:") +
xlab("Requests") +
ylab("Time (s)")

request-time.png

Table 2: CloudLab University of Wisconsin c220g1 Hardware Configuration cloudlab:hardware
CPU Two Intel E5-2630 v3 8-core CPUs at 2.40 GHz (Haswell w/ EM64T)
RAM 128GB ECC Memory (8x 16 GB DDR4 1866 MHz dual rank RDIMMs)
Disks Two 1.2 TB 10K RPM 6G SAS SFF HDDs; One Intel DC S3500 480 GB 6G SATA SSDs
NICs Dual-port Intel X520-DA2 10Gb NIC (PCIe v3.0, 8 lanes); Onboard Intel i350 1Gb

3.5 Analysis

The data were analyzed with respect to both raw performance and hardware cost.

3.5.1 Change Performance

Further investigation into the performance properties of the system is warranted since, although the ZooKeeper cluster consumed over 35 GB of RAM during the run, one request per second is far below the expected rate of over 10,000 requests per second. These results were consistent with other testing, though, as even making 100 sequential requests to the ZooKeeper cluster directly from the command line took 65 seconds. Given that every set request takes two queries (an extra request is required to load the current node version before setting it), the system can only sequentially complete 50 Mumps data sets in 65 seconds. A different or custom cluster query system that replaces the ZooKeeper command-line client might be required, as the cluster remained largely idle during the 30 minute 1000-sequential-connections test. Improving the throughput may also increase the number of concurrent requests that can be handled at once: the system emitted 100 - 300 Mumps changes per second, or an average of 400 cluster queries per second. Based on the 10,000 cluster queries per second target, ZooKeeper should be able to handle at least 5,000 Mumps changes per second.

While it is possible two different sets of cluster throttling settings could have come into play, this is unlikely for two reasons:

  1. ZooKeeper doesn't begin throttling incoming connections by default until at least 1000 connections are open. Most of the tests never opened that many connections.
  2. The ^CLUSTER global's own throttling settings were set to prevent communication with the cluster more than every 0.1 seconds. Individual requests took longer than that to resolve in both dispatch methods, and so were not rate-limited by the internal throttling settings.

Regardless, the performance results are clear: the concurrent dispatch model, with a maximum number of concurrent connections, is 1.73 times faster than the sequential model.

3.5.2 Per User Cost

The cost of recreating a c220g1 that supports at least 500 concurrent changes is around $1,500, based on today's non-sale prices. Assuming aggressive writes, where each user creates one record every few seconds while interacting with patients, and given that creating a prescription record requires 9 changes, 500 concurrent changes support roughly 50 concurrent users, so a small clinic can be supported at a cost of about $30/user. If the throughput can be improved to the expected values, 5,000 concurrent changes, then 500 users may be supported for $3 each. With write batching, as noted in Cluster Throughput Improvements, the cost per user may continue to drop precipitously.

The current limitation is the speed with which changes are committed to the database. Right now, one prescription record that contains 9 database changes would take nearly 10 seconds to commit, while it should take only a millisecond. If the performance issue can be resolved, this may be a viable option for organizations looking to save money while decentralizing their hardware.

3.6 Future Work

During the course of the project, several possible enhancements and optimizations were identified, but they were identified too late or were too large to implement in the available time.

3.6.1 Parallel Documentation

Next steps for this work include adding a UI for entering data and displaying potentially conflicting data to the user. In particular, the user must be shown data that was entered or received since the beginning of the user's session before they're allowed to save new data.

Performance measurements are expected to improve in office visit workflows where both nurses and physicians interact with the patient. The physician's section of the workflow (therapy planning and ordering, including writing prescriptions and lab requisitions) may now be started concurrently with the nurse's section (rooming, including documenting vitals, medical history, and therapy compliance). Since reviewing data takes less time than entering it, performing the workflows in parallel may result in a faster workflow, even when including additional time required for the physician to asynchronously receive the patient and review the nurse's documentation. Additional time would be required if reviewing the nurse's documentation invalidated the orders the physician was about to place.

sequential-workflows.png

Figure 5: Sequential workflow, where both users document on the same physical device.

concurrent-workflow.png

Figure 6: Parallel workflow, where physicians start charting before the nurse finishes with the patient.

3.6.2 Record Entry and Data Review UI

The trickiest part of this system will be making the UI fast and intuitive. The system must record the instant a user opened a patient's chart and push any updates for that patient it receives from the cluster (or from other users) to that user before any decisions are finalized. This involves an 8-step process:

  1. User opens patient chart to begin documenting.
  2. System records that time as user's documentation start time.
  3. Additional information is recorded by other users on the patient's record.
  4. Additional information is received by the cluster on the patient's record.
  5. User attempts to finalize their documentation.
  6. System pushes the changes that occurred between now and the user's documentation start time to the user's session, indicating which data elements changed and the date they changed on.
  7. User reviews changes and either commits the changes or updates their own documentation (returning to step 2).
  8. User closes patient's chart.

Step 6 is the most complex piece and requires the most care. When the user has indicated that they want to finalize their documentation, an efficient UI may switch from a single-pane data-entry view to a two column data-entry and data-review display. New data entered in the visit would need to be timestamped and highlighted in the review column. This effectively creates a change log against which the user can compare their changes.

The system could also be more restrictive by momentarily locking, or turning the user's edits into a transaction, for steps 5 and 6, and rejecting the edits if any new ones had been applied during the process. This would prevent any other users from finalizing documentation at the same time and prevent near-concurrent edits. Alternatively, the system could make a local copy of the patient's record, commit the user's changes on top of that copy, and then ask them to merge their copy back into the data globals.
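A minimal sketch of the restrictive variant, assuming the session start instant is tracked per chart; the lock name and the commitEdits helper are hypothetical:

<hypothetical-finalize> =

finalize(sessionStart)
  ; Reject the save when any ^AUDIT entry arrived after the session began;
  ; otherwise apply the user's pending edits while holding a short lock.
  n clean
  L +^AUDIT("finalize")
  s clean=$O(^AUDIT(sessionStart))=""
  i clean d commitEdits()
  L -^AUDIT("finalize")
  q clean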

<step6-ui> =

+------------------------------------+
|        This Visit's Updates        |
|         ------------------         |
|                                    |
|        1 BP - 120 / 80             |
|      > 4 BP - 110 / 90             |
|                                    |
|      > 3 HR -  70                  |
+------------------------------------+
|             Edit Log               |
|            ----------              |
| 1 - 10 minutes ago (David, x6488). |
| 2 - You opened chart.              |
| 3 - 6 minutes ago (David, x6488).  |
| 4 - 3 minutes ago (David, x6488).  |
+------------------------------------+

step6-ui.png

Figure 7: Proposed data review UI elements.

3.6.3 Cluster Throughput Improvements

In order to minimize the number of writes made to the cluster, the outgoing cluster synchronization model should be changed from one that immediately writes every changed field to the cluster, to a batch model, similar to how cluster changes are ingested. The system should keep a timestamp of the last change synced to the cluster and periodically sync all the changes between then and the current instant to the cluster in a single batch of one or more messages. This change is no more inefficient than the current approach, because the system is already reading and writing its own edits from the cluster.
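A sketch of such a batch job, reusing writeChange from the experimental notes as the per-node push; the ^CLUSTER("last-sync") bookmark node is hypothetical, and a real implementation would group the nodes into one cluster message rather than pushing them individually:

<hypothetical-batch-sync> =

syncBatch()
  ; Walk every ^AUDIT node recorded after the last synced node and push it,
  ; then advance the bookmark so the next run starts where this one stopped.
  n node
  L +^CLUSTER("last-sync")
  s node=$G(^CLUSTER("last-sync"),$na(^AUDIT("")))
  f  s node=$Q(@node) q:node=""  d writeChange(node) s ^CLUSTER("last-sync")=node
  L -^CLUSTER("last-sync")
  q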

3.6.4 Data Structure Improvements

It may be wise to demote the edit identifier node in the non-audit globals to a lower position. Since edit identifiers are sequential only within servers and effectively identify the originating server, edit 59003 (at the beginning of the 59000-60000 range) may have been made months before edit 49871 (at the end of the 49000-50000 range). This is a highly unexpected outcome in a hierarchical and otherwise chronological data model.

The simplest solution is to demote the edit ID beneath the edit instant so that edits are correctly ordered. However, that mirrors the structure of the AUDIT global and hides the historical chronology of the data. A better solution may be to replace the edit ID with the edit's original date, and move the edit ID beneath that date. That would allow appointments to appear on the patient's record chronologically, while also keeping edits that affect the same date close together, regardless of their source server. Importantly, the standard Mumps date identifier (+$H) would need to be specially configured to return a UTC date fnis19:_gtm_manual.
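One possible reading of that reordering, with hypothetical values (65499 standing in for a UTC $H-style date):

; current layout: edits interleave by server-assigned edit ID
^MEDRX(315,59003,6,1588363791787797)=30
; possible reordering: the edit's UTC date sits above the edit ID
^MEDRX(315,65499,59003,6,1588363791787797)=30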

Finally, there is one possible case that could result in overwriting existing data in a cluster, and the data model should be adjusted to handle that case. If two remote users on different remote servers make changes to a preexisting edit at the exact same microsecond, then the system will be unable to differentiate between those edits when they're synced back from the cluster. One would have a later local edit instant, but because the record ID, edit ID, and remote edit time were all the same, the two entries would overwrite one another. The obvious solution is to allow the remote edit loading process to reject already-existing entries, but the system relies on overwriting existing entries to advance the state loaded from the cluster without keeping a list of which entries are local. The simplest solution is to require each record edit to occur on a new, server-local, edit ID. This approach becomes much more reasonable if the edit ID is demoted beneath the edit's date, as mentioned above.

3.6.5 Cluster Audit Errors

If the above changes were made to the data hierarchy and storage rules, it would be entirely reasonable for the local server to log an error when ingesting cluster data that changes the value of already-recorded data. While conflicting edits are incredibly unlikely, it is unwise to be unprepared for data corruption somewhere in the pipeline.

3.6.6 Call Out to ZooKeeper through JNI

As noted above, switching query systems may help increase throughput. Implementing a new query system would require using the Mumps external call system fnis19:_gtm_manual to call into a C library that uses the Java Native Interface (JNI) to call into ZooKeeper and return replies. This would also help simplify the query and parseOutput calls, since much of the spurious output they currently have to parse could be avoided.

3.7 Related Work

While work is regularly done to improve hardware utilization and throughput, such as when Intel and InterSystems worked together in 2015 to improve database throughput by 60% intel2015:epic_scalability, organizations involved with commercially available electronic medical record systems do not publicly publish studies about their private data models. Even the open-source medical record systems, Gnu Health martin2016gnu, OpenEMR noll2011qualitative, OpenMRS wolfe2006openmrs, and VistA advani1999integrating, do not appear to study how the format of the data model affects data sharing and user workflows. Much more public work has recently been done on the data models used to exchange data between different healthcare organizations kalra2006electronic, including FHIR ("fire") bender2013hl7,hl7:fhir401, the standard message structure through which organizations exchange data, and USCDI uscdi:2020v1,epic:uscdi2020, the specific medical data elements organizations exchange. While those are important and necessary avenues of study for patient treatment, it seems likely that more research into how low-level data structures inform high-level user workflows is warranted.

3.8 Conclusion

Overall, eliminating Mumps database locks holds promise, but is not yet completely proven. It is possible to safely add a distributed ZooKeeper database to a Mumps database in a cooperative fashion to leverage the fast local access of the Mumps NoSQL database, while reducing hardware costs by distributing database writes across a cluster. However, before organizations could take advantage of the reduced hardware costs, further work would need to be done on improving the performance of the ZooKeeper database. Once that is resolved, the next step would be to implement a transactional user interface.

3.9 Reproducible Research

The sources for this paper are embedded in this file and can also be downloaded or reproduced from the disk image below by following the instructions in the README.

New performance data may be captured by running make src && make data from within the cs790-p1 folder. Additionally, the full set of experimental notes may be reviewed.

3.10 Mumps Quick Reference

This is a very quick introduction to the Mumps language and database, though more comprehensive guides exist walters1997m,wiki:MUMPS,newman03:_mumps_docum.

Mumps supports the normal flow control operations: If, Else, For, $S (select, case), Quit (return, break, continue), Goto, Hang (sleep), Halt (exit), Job (fork), and logical operators & (and) and ! (or). Evaluation order, unless parenthesized, is strictly left-to-right, even for arithmetic. Nearly every control operator accepts a conditional expression (:x) that determines whether the statement is evaluated, as an if shorthand. Void functions are called with Do, which can also be used to create a new stack-level, like C's curly-braces. Functions that return values are prefixed with $$ when called. Each operator may also be abbreviated with a single-character name for brevity, i for if, f for for, etc.

Mumps does not enforce strict typing rules. Every value is implicitly a string, though any value may be coerced to a number by using it in arithmetic. Numbers are parsed from strings going left to right, stopping at the first non-numeric character (excluding "e", for scientific notation). Of special interest here is the special variable horolog ($H), which is made up of the current number of days since December 31st, 1840, a comma, and then the number of seconds since midnight today ($H = 65499,4425). The current date can thus be extracted from the horolog in a single statement, s today=+$H, or an empty ("null") or non-numeric variable can be trivially coerced to zero, as in s zero=+"".

Mumps uses stack-based dynamic scoping, with aliasing: every function exports its declared variable names to any called child functions. The New and Kill operators can be used to modify the current symbol table and redefine existing names. New creates a new symbol table at the current stack frame, if one does not already exist, and then creates an empty entry in that symbol table for the variable's name, replacing any existing entries. Kill removes the entry for the variable in the current stack frame. Future attempts to access that variable will raise an undefined variable error. Importantly, given the nature of dynamic scoping, new symbol tables are not created at each new stack-level, but only when explicitly created with New.
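A short illustrative sketch of those scoping rules (the labels are hypothetical):

<scoping-demo> =

caller()
  new x
  set x="caller"
  do hideX()
  write x,!          ; still "caller": hideX shadowed the name with New
  do changeX()
  write x,!          ; now "changed": changeX wrote through to the caller's x
  quit
hideX()
  new x              ; fresh symbol-table entry hides the caller's x
  set x="hidden"
  quit
changeX()
  set x="changed"    ; no New, so this updates the caller's x
  quit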

Mumps stores data in a hierarchical database where each root node is a hat-prefixed "named global," like "^Patients" or "^ORDERS". Subnodes are stored in a multi-dimensional, comma-delimited array. Local variables may also have any number of subnodes but must be passed by reference, indicated with a preceding dot instead of the usual ampersand. For example, the "^Addr" global might have one entry for each home address and be accessed through a comma-delimited notation.

<Address> =

set ^Addr("Street")=2
set ^Addr("Street",1)="2450 2nd Street"
set ^Addr("Street",2)="920 Ridge Street"

The data would be represented in the database like the following.

<address-layout> =

digraph g {
    "^Addr: 2" -> { "Street: 2" "City: 2" "State: 2" }
    "Street: 2" -> { "1: 2450 2nd Street" "2: 920 Ridge Street" }
    "City: 2" -> {"1: Philadelphia" "2: Chicago"}
    "State: 2" -> {"1: Pennsylvania" "2: Illinois"}
}

address-demo-2.png

Figure 8: Database-level Address Global Layout.

Finally, Mumps supports indirect execution through the eXecute (eval) statement, which interprets arbitrary code. Mumps also allows indirect access to globals via @ notation. In the above example, if x = "^Addr" then @x would evaluate to 2. If y = "City" then @x@(y,1) evaluates to "Philadelphia".

A listing for 99-bottles-of-beer follows.

<99-bottles> =

 1: bottles99()
 2:   new btls
 3:   set btls("max")=99
 4:   set btls("min")=0
 5:   for btls=btls("max"):-1:btls("min") do
 6:   . write:btls>0 $$multipleBtls(btls),!
 7:   . quit:btls>0
 8:   . write !,"No beer!"
 9:   quit
10: 
11: multipleBtls(bottleCount)
12:   new lyric
13:   set lyric=bottleCount_" "
14:   ; "1:" indicates the default case.
15:   set lyric=lyric_$S(bottleCount=1:"bottle",1:"bottles")
16:   set lyric=lyric_" of beer on the wall."
17:   quit lyric

Many of the rules above can be bent or broken using more advanced features of Mumps, like $QUIT, which is true when the current code block was called as an extrinsic function (set x=$$f()), but false when called as an intrinsic procedure (do f()). This allows for magnificently inscrutable statements like QUIT:$QUIT $QUIT QUIT, which overloads the function's return value and always exits a function, but returns true only when the caller expects a value.

bib/bib.bib

4 Dev Notes

4.1 [2020-01-16 Thu]

Possible Topics:

  1. Review the methods by which the GT.M Mumps (NoSQL) database (one used at work) synchronizes shards and locks across the Production data-set. Identify possible improvements.
  2. Identify methods to ensure randomness on headless servers from the first cold boot (making every VM a one-shot VM since it stores a private key, or requiring a connection to a remote private key generation server).
  3. A survey of the ethical issues behind the database of ruin (which was at one point available as a torrent, after someone collated personally identifiable information from every major database dump in the last several years, possibly this one?).
  4. Determine the level of reidentifiability in aggregated healthcare data using data-sets from Google's Nightingale, or Amazon's NIH data-sets (similar to the one done for Flickr/Twitter users). Lack of database access will likely prevent this research, as well as lack of IRB approval.

4.2 [2020-01-27 Mon]

And it's next week, when the 1 - 2p summaries of topics 1 and 2 are due. Time to finish them.

4.3 [2020-02-08 Sat]

Starting project 1.

4.4 [2020-02-22 Sat]

How the heck do you copy files around as part of creating a Propellor image? This doesn't work the way I'd expect it to.

<utilities-arent-properties> =

localhost :: Host
localhost = host "localhost" $ props
  & imageBuilt (RawDiskImage "/tmp/cs790-p1.img") c MSDOS
  where
    c d = Chroot.debootstrapped mempty d $
        & Utilities.createDirectory "/home/science/.propellor"
        & Utilities.copyFile "/home/science/cs790-p1" "/home/science/.propellor/config.hs"
        & Apt.installed [ "fis-gtm" ]

I'll leave it be for now; it's soaking up more time than I have to give it. But we're one cp ~/cs790-p1/src/config.hs ~/.propellor/ away from transparently configuring the image.

4.5 [2020-02-23 Sun]

Never mind why it's not exporting to PDF; I can't explain why that fails, and exporting the output doesn't succeed either.

The data model is simple but sufficient to create merge conflicts. I don't even constrain the data elements to specific scheduled visits. This may need additional help before I can even build a merge model on top of it.

4.6 [2020-02-25 Tue]

Time to start creating some data for the merge test.

Turns out I was only able to start creating the setting functions.

4.7 [2020-02-26 Wed]

Save data functions are done! They're probably broken, but I'm sure they'll work fine.

4.8 [2020-03-01 Sun]

Redoing the introduction to be more technically accurate. Also using a modified LorikeeM to get Mumps font lock.

The Medication records are complete, at least. It also took me a while to work out the syntax for the self-contained section reference, which I swear was simpler before and didn't require specifying both noweb and noweb-ref. Whatever, it works now.

4.9 [2020-03-02 Mon]

Added documentation comments and some more basic test data.

4.10 [2020-03-09 Mon]

Eventually, I was able to get propellor to run. However, it was unable to build a debootstrap image because it couldn't chroot. The answer appears to be docker-in-docker, running privileged Docker on top of unprivileged Docker, so the chroot can finally mount /proc within the image correctly. Or I could just try fakechroot (fakechroot fakeroot debootstrap --variant=fakechroot), which propellor doesn't seem to support anywhere in the code. If it were anywhere, it would be in Chroot.hs, but no dice. It does have debootstrap::useEmulation, but that just determines whether it runs debootstrap or qemu-debootstrap, which isn't what I'm looking for. In fact, Chroot.hs::231 suggests that mounting /proc is unavoidable.

/proc needs to be mounted in the chroot for the linker to use
/proc/self/exe which is necessary for some commands to work

So, it's Building Docker images with GitLab CI/CD for me, then. Possibly with a dash of quickstart.

4.11 [2020-03-10 Tue]

I need gitlab-runner, but neither GitLab nor Debian has a runner package for buster. Shucks. How will that work?

I still need to fix the partitions-are-wrong-sizes issue, too.

4.12 [2020-03-15 Sun]

I'll just use git-lfs to manage the built disk images rather than trying to mess with building them on push.
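
A sketch of what that setup looks like, assuming the image name used later in these notes; lfs tracks the pattern and ships the blob out-of-band on push:

git lfs install
# track disk images through lfs instead of storing them as regular git blobs
git lfs track "*.img"
git add .gitattributes cs790-p1.img
git commit -m "Add built disk image via git-lfs"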

Since it's still being wacky about the file system size, though, I'll need to mount the image locally first. Instead of mount --options=loop,offset=4194304 cs790-p1.img ./aMountDir, I'll need to use losetup to get the image attached to a block device.
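
As an aside, the 4194304-byte offset is just the start of the first partition (4 MiB into the image); if the layout ever changes, parted will report the right number:

# print the partition table in bytes; the "Start" column of partition 1
# is the offset to hand to losetup or mount
parted -s cs790-p1.img unit B print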

Attach:

losetup --offset 4194304 --show -f cs790-p1.img

Detach:

losetup -d /dev/loop0

gparted shows me I was missing e2fsck -f -y -v -C 0 before the parted resizepart 1 100% and resize2fs.

# grow the image file by 256 MB of zeros
dd if=/dev/zero bs=16M count=16 >> cs790-p1.img
# grow the first partition to fill the new space
parted cs790-p1.img resizepart 1 100%
# attach the root file system (4 MiB into the image) to a loop device
losetup --offset 4194304 --show -f cs790-p1.img
# resize2fs refuses to run without a clean fsck first
e2fsck -f -y -v -C 0 /dev/loop0
# grow the file system to fill the partition
resize2fs /dev/loop0
losetup -d /dev/loop0

But I can circumvent all that by just adding an extra 1 - 2 GB to the image up front. I still don't know how we consume 500 MB after starting the image the first time.
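
Concretely, that just means a bigger count on the same dd append before doing anything else with the image; 128 blocks of 16M is roughly 2 GiB (the count here is my own guess, not something the project pins down):

# append ~2 GiB of zeros, then grow the partition and file system as above
dd if=/dev/zero bs=16M count=128 >> cs790-p1.img
parted cs790-p1.img resizepart 1 100%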

Further, I've avoided needing to use the GitLab infrastructure by building the images myself and just using git lfs to host them. I doubt I can use that through hg-git, though. I have also submitted this patch to propellor to make Git.pulled work. Unfortunately, that means this can only build on Propellor >5.10.1.

--- Git.hs      2020-03-15 18:46:56.110523997 -0500
+++ Git.hs      2020-03-15 18:44:23.136891329 -0500
@@ -107,10 +107,12 @@
   where
         desc = "git pulled " ++ url ++ " to " ++ dir
         go = userScriptProperty owner
-                [ "cd " ++ shellEscape dir
-                , "git pull"
+                 (catMaybes checkoutcmds)
+                 `changesFileContent` (dir </> ".git" </> "FETCH_HEAD")
+        checkoutcmds =
+               [ Just $ "cd " ++ shellEscape dir
+               , ("git pull " ++) <$> mbranch
                 ]
-                `changesFileContent` (dir </> ".git" </> "FETCH_HEAD")

 isGitDir :: FilePath -> IO Bool
 isGitDir dir = isNothing <$> catchMaybeIO (readProcess "git" ["rev-parse", "--resolve-git-dir", dir])
[2020-03-15 Sun 17:11]
While that's building, I'll just go back to making the test data that should've been complete last week.

4.13 [2020-03-16 Mon]

Setting up the server to allow the current user to log in as root on localhost.

<root-localhost-ssh> =

yes "" | ssh-keygen
sudo "mkdir /root/.ssh"
homedir=~; sudo sh -c "cat $homedir/.ssh/id_rsa.pub >> /root/.ssh/authorized-keys"
[2020-03-16 Mon 21:42]
And the image now builds.

4.14 [2020-03-17 Tue]

The thing is that we're still generating IDs with server-local `$I` ($INCREMENT), which won't actually coordinate locks when I spin up multiple nodes. We still need some sort of distributed lock system, like ZooKeeper. I can just put a ZooKeeper node on every server and query it to get new record and edit IDs.
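
As a sketch of what that could look like at the zkCli level (not what the project code currently does): ZooKeeper's sequential znodes hand back a monotonically increasing suffix that could serve as a cluster-wide record or edit ID. The /records parent node is assumed to exist.

export ZKPATH="/usr/share/zookeeper/bin"
# each create -s returns a fresh path like /records/id-0000000007;
# the numeric suffix is unique and monotonic across the whole cluster
$ZKPATH/zkCli.sh -server localhost:2181 create -s /records/id- x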

4.15 [2020-03-22 Sun]

Finishing the initial data setup today. Also, Joey reasonably rejected my patch because it relied on the image's git repository getting into a detached-head state, which shouldn't happen to most people. It probably happened to me because I was futzing around in the permanent chroot-image folder.

Other notes I made throughout the week.

4.15.1 DONE Performance Comparison

(Moved into relevant todo section.)

4.15.2 DONE Edit ID Storage Location

(Done in code to make ID reservations easier in the future.)

4.15.3 DONE Record and Edit Reservations

(Moved into relevant todo section.)

4.15.4 Appointment ID and Date

4.16 [2020-03-23 Mon]

Started researching ZooKeeper use. Found a Simple Watch Client and evidence of monotonic counters, which are really all I need to generate keys that stay unique across servers. You can side-step this by setting unique per-server prefixes for nodes, of course, but I want to do this while keeping the servers in concert.

4.17 [2020-03-25 Wed]

Zookeeper! I have an example! That example is good enough for my needs…?

<zk-cli> =

export ZKPATH="/usr/share/zookeeper/bin"
sudo $ZKPATH/zkServer.sh start
$ZKPATH/zkCli.sh -server localhost:2181 get /records

The ZK bug list mentions that setData is atomic, and that's accessible to the client through the dataVersion field on get and set: just read the dataVersion field with get and pass that version into the set, which then fails if anyone else has written in the meantime. Then, bam! Atomicity.

<zk-atomicity> =

export ZKPATH="/usr/share/zookeeper/bin"
$ZKPATH/zkCli.sh -server localhost:2181 get /records

2
cZxid = 0x5
ctime = Wed Mar 25 21:19:07 CDT 2020
mZxid = 0x16
mtime = Wed Mar 25 22:23:02 CDT 2020
pZxid = 0x5
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 1
numChildren = 0

<zk-bad-set> =

export ZKPATH="/usr/share/zookeeper/bin"
$ZKPATH/zkCli.sh -server localhost:2181 set /records 2 1

version No is not valid : /records
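
For contrast, passing the dataVersion that get just reported lets the set go through, which is the compare-and-set I actually want (a sketch; the value 3 is arbitrary):

export ZKPATH="/usr/share/zookeeper/bin"
# get reported dataVersion = 2 above, so a set conditioned on version 2 succeeds
# and bumps dataVersion to 3; a concurrent writer would get the error instead
$ZKPATH/zkCli.sh -server localhost:2181 set /records 3 2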

<zk-server-stop> =

sudo $ZKPATH/zkServer.sh stop

ZooKeeper JMX enabled by default
Using config: /etc/zookeeper/conf/zoo.cfg
Stopping zookeeper … STOPPED

4.18 [2020-03-26 Thu]

<zookeeper-output> =

/usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 get /records 2>&1 | egrep "(^[0-9]+|^dataVersion)"

The output-read example is derived from pages 357-360 of fnis19:_gtm_manual:

<mumps-read-output> =

set p="MyProcs"
open p:(command="ps -ef|grep $USER":readonly)::"PIPE"
u p
f  r x($I(x)):0 q:$zeof
close p
u $p
f i=1:1:x w !,x(i)

4.19 [2020-03-31 Tue]

Added an untested Zookeeper library that should work.

4.20 [2020-04-02 Thu]

Remove paper auto compile for now, until I make it work again.

Testing that zk lib… This is what the output looks like:

cmd: /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 get /records 2>&1; echo "$?"
output: 44
output(0): log4j:ERROR setFile(null,true) call failed.
output(1): java.io.FileNotFoundException: /var/log/zookeeper/zookeeper.log (Permission denied)
output(2): 	at java.base/java.io.FileOutputStream.open0(Native Method)
output(3): 	at java.base/java.io.FileOutputStream.open(FileOutputStream.java:298)
output(4): 	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:237)
output(5): 	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:158)
output(6): 	at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
output(7): 	at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
output(8): 	at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
output(9): 	at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
output(10): 	at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
output(11): 	at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
output(12): 	at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
output(13): 	at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
output(14): 	at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
output(15): 	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
output(16): 	at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
output(17): 	at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
output(18): 	at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
output(19): 	at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
output(20): 	at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
output(21): 	at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
output(22): 	at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
output(23): 	at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
output(24): 	at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
output(25): 	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
output(26): 	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
output(27): 	at org.apache.zookeeper.ZooKeeperMain.<clinit>(ZooKeeperMain.java:53)
output(28): Connecting to localhost:2181
output(29): WATCHER::
output(30): WatchedEvent state:SyncConnected type:None path:null
output(31): 1
output(32): cZxid = 0x5
output(33): ctime = Wed Mar 25 21:19:07 CDT 2020
output(34): mZxid = 0xe8
output(35): mtime = Thu Apr 02 23:25:20 CDT 2020
output(36): pZxid = 0x5
output(37): cversion = 0
output(38): dataVersion = 8
output(39): aclVersion = 0
output(40): ephemeralOwner = 0x0
output(41): dataLength = 1
output(42): numChildren = 0
output(43): 0
output(44):

Of course, I needed to filter down the output because read:0 is super-aggressive and never blocks for anything, as it's designed to. Originally, I had:

f  r output($I(output)):0 q:$zeof

That's marvelously concise: try reading (r) from the pipe without blocking (0) into a new output subnode for each read (output($I(output))), until the end of stream (q:$zeof). That's problematic, though, because we're essentially writing a read-rate timer. Because we don't pause, we're just logging empty input every iteration, recording over 500 empty lines for something as simple as echo 'hi'. Blocking would be simpler, but that prevents us from doing more complex flow control, like maximum read timers… I could add a one-second block, but that still leaves us in the same situation, just with a one-second resolution.

The simplest solution is to just filter out the empty reads, reusing the existing subnode until it contains data:

f  r output(output):0 s:output(output)'="" %=$I(output) q:$zeof

4.21 [2020-04-06 Mon]

4.22 [2020-04-07 Tue]

And things are all actually working with the data creation now.

I did assume the nodes existed on ZK and forgot to create them, though. I can probably assume they exist for the project? I'm mostly just running it on my local server, so that's probably fine, even though it kind of defeats the purpose. I'll do that if I have time. Doubt I will, since the report is due in 3 weeks.

<zk-make-nodes> =

$ZKPATH/zkCli.sh -server localhost:2181 create /edits 1
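
If I do get around to handling this properly, a defensive wrapper could create the node only when it's missing; this is just a sketch that reuses the dataVersion grep as an existence test:

export ZKPATH="/usr/share/zookeeper/bin"
# create /edits only if the get doesn't report a stat for it
$ZKPATH/zkCli.sh -server localhost:2181 get /edits 2>&1 | grep -q "dataVersion" \
  || $ZKPATH/zkCli.sh -server localhost:2181 create /edits 1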

4.23 [2020-04-09 Thu]

Let's add some UI. Start with a basic menu.

<ui-test-welcome> =

Welcome to the Distributed Locks UI test.

In this system, you can act as different users making conflicting edits to a
patient's record.  You can then examine the patient's record as it appears from
the perspective of each user.  Data created since your session was started or
saved might conflict with the data you are entering and requires review.

Current Sessions (up to 7):

| ID | Start Time   | Last Saved | User     |
|----+--------------+------------+----------|
|  1 | 10:03 AM     | 10:20 AM   | Marcy    |
|  2 | 10:15 AM     | Never      | Dr. Jane |
|  3 | 10:16 AM     | Never      | Dr. Jane |
|  4 | 10:16 AM     | Never      | Dr. Jane |
|  5 | 10:18 AM     | Never      | Dr. Jane |
|  6 | 10:20 AM     | Never      | Dr. Jane |
|----+--------------+------------+----------|
|  8 | Add New Edit |            |          |
|  9 | Delete Edit  |            |          |
|  0 | Quit         |            |          |

Which session do you want to edit?

:

When we reach 7 sessions, the "add new" option is removed.

<ui-test-open-session> =

Session #3: Dr. Jane, started at 10:15 AM.  2 newer edits exist.


In this session, you have added:

- Vitals
- Medications

Do you want to:

1. Review this session's data.
2. Review conflicting data.
3. Add new session data.
4. Remove this session's data.
5. Back to session select.

:

So, if I standardize the times required for each of the tasks (based on HMKY syntax), I can provide estimates for specific workflows and the time to review them. That's probably what I should be focusing on right now, rather than building a text UI, because that gives me publishable results instead of something to demo with. I've added a set of workflows to measure that can be applied to and optimized by this system.

The disappointing part is that the performance measurements can't be taken directly on the host system, so the experiment isn't fully reproducible and the disk image becomes much less meaningful. But I can at least record the data behind the performance measurements and make that available.

Use screen recordings or CogTool to record the performance measurements.

4.24 [2020-04-12 Sun]

Started timing summary.

4.25 [2020-04-16 Thu]

Started writing performance section.

4.26 [2020-04-17 Fri]

To make the PDF export correctly, I need to walk through each of the "dot" and "plantuml" src blocks in the file and export those before exporting to LaTeX.

4.27 [2020-04-18 Sat]

4.28 [2020-04-19 Sun]

I'm unable to reference an actual EMR, so I don't have experimentally measured timings, unfortunately. I do have a reasonable best guess, however, and can cover a few different workflows.

I'm also canceling the UI section, generally, because as much fun as it would be to make a text UI, I don't quite have the time to do that. Perhaps I'll do that last as time allows.

While a scalar is loading from the cluster, the whole system stalls on startup for some reason. I don't remember the invocation for investigating the stack. I think it's simply zwrite.

<stack-invocation> =

s $ETRAP="B"
zshow
zwr

But in this case, we're just looping forever somewhere, so breaking doesn't help. …Except it's not actually looping or hanging; it just took a minute.

# give my user group access to the GT.M distribution directory
sudo groupadd mumps
sudo usermod -a -G mumps `whoami`
sudo chgrp mumps $gtm_dist

4.29 [2020-04-20 Mon]

Rebuilding a routine in the interpreter:

zlink "Cluster.m"

4.30 [2020-04-26 Sun]

Since org-mode doesn't seem to support setting toc:nil p:nil pri:nil tasks:nil ^:nil at a subtree level, I've configured it that way at the top-level to export the paper correctly, even though I want nearly the opposite for the HTML export.

Turns out I can't use tasks:nil at all, because that suppresses every heading with a TODO, instead of just suppressing the TODOs.

4.31 [2020-04-28 Tue]

Realized I didn't have any performance data, so I should collect some, at least throwing data at ZK to get a sense of the throughput with lots of little jobs demanding attention vs sequential jobs.

Of course, I can't really just drop the disk image onto the CloudLab servers because (A) it doesn't have any networking setup and (B) I don't have an easy way to get my SSH key onto the disk image. So, I'll just follow the propellor instructions. But the image's version of stack is too old to work, so I need to install a new version first. Exciting.

sudo apt-get install cabal-install
curl -sSL https://get.haskellstack.org/ | sh -s - -d ~/.local/bin
cabal update
propellor --init
sudo bash -c "cabal unpack propellor; cd ~/propellor*; stack install"
echo "A" | propellor --init

First, change Debian (Stable "buster") to Buntish "xenial" in config.hs. Then remove "propellor" itself from the config.hs.

And here we come to a stopping point because the OS needs 188 more bytes of entropy to generate the GPG key for Propellor. haveged and wget -r -l 3 (first Google result) seem like good approaches.
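
For what it's worth, the kernel reports how starved it is, so it's easy to watch haveged do its job (standard Linux paths, nothing project-specific):

sudo apt-get install haveged
# current entropy estimate, in bits; haveged should keep this in the thousands
cat /proc/sys/kernel/random/entropy_avail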

ssh-keygen
sudo bash -c "cat /users/nickdaly/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys"
git clone https://gitlab.com/nickdaly/cs790-p1
cp ~/cs790-p1/src/config.hs ~/.propellor
propellor --spin localhost

And it still doesn't work. I'm giving up on it. Installing the packages manually, since that's all I really need here, is simple enough:

sudo apt-get install fis-gtm elpa-org emacs haveged make org-mode zookeeper

And of course elpa-org doesn't exist until bionic. But it's a replacement for org-mode, which does exist in xenial. Remind me not to pick disk images from 2016 for my experiments anymore.

Interestingly, the GDE utility, required to create a permanent data store, is missing from the Xenial version. It's in the Bionic version, though, so time to switch the image I'm running. This is an astonishingly long run of poor luck.

[2020-04-29 Wed 01:18]

Second try, this time with a c220g1, the lowest-powered server available. Interesting.

sudo apt-get install fis-gtm elpa-org emacs haveged make org-mode zookeeper

[2020-04-29 Wed 01:39]
And the environment is configured in 20 minutes.

So, the performance test is simple here:

sudo apt-get install fis-gtm elpa-org emacs haveged make org-mode zookeeper
sudo /usr/share/zookeeper/bin/zkServer.sh start

cd cs790-p1/
make src
logout
# login

cd cs790-p1/
make src

# actual test part.
killall mumps
netstat -pant | grep 2181 | wc -l

echo "Start!"
date -R
gtm/gtm-env
f i=1:1:1000 s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)
h
date -R
echo "Requests dispatched!"
while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
date -R
echo "Requests made!"
while [[ `netstat -pant 2>/dev/null | grep 2181 | wc -l` > 1 ]]; do sleep 0.5; done
date -R
echo "Connections closed!"

10000 crashes the server. Or at least renders it unresponsive.

echo "Start!"
date -R
gtm/gtm-env
f i=1:1:500 s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)
h
date -R
echo "Requests dispatched!"
while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
date -R
echo "Requests made!"

echo "Start!"
date -R
gtm/gtm-env
f i=1:1:1000 s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)
h
date -R
echo "Requests dispatched!"
while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
date -R
echo "Requests made!"

echo "Start!"
date -R
gtm/gtm-env
f i=1:1:2000 s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)
h
date -R
echo "Requests dispatched!"
while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
date -R
echo "Requests made!"

echo "Start!"
date -R
gtm/gtm-env
f i=1:1:3000 s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)
h
date -R
echo "Requests dispatched!"
while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
date -R
echo "Requests made!"

4.32 [2020-04-29 Wed]

So, the performance test from yesterday was more successful than expected (the concurrent set handled 500 simultaneous requests before bombing out at 1000). That's given me the courage to try even bigger ranges with the sequential set. Of course, the sequential set should be able to handle an effectively unlimited number of requests, because it processes only one at a time, but the results didn't seem to be scaling linearly, and I'd like to understand why.

src/perf-test 2>&1 | tee data/perftest.log

<perf-test> =

# runs performance tests

function perfTest {
    # runs performance tests, returns zero when successful.
    if [[ "$1" == "" ]]
    then
        return 1
    fi

    max=$1
    echo "`date -R`: Start: $max "
    echo "`date -R`: Dispatching requests: $max "
    echo 's $ETRAP="B" f i=1:1:'$max' s %=$$SetRecord^SetData($$Pat^SetData,i,i,"",i,i)' | gtm/gtm-env > /dev/null
    echo "`date -R`: Done dispatching requests."
    echo "`date -R`: Handling requests: $max "
    while [[ `pidof mumps` != "" ]]; do sleep 0.5; done
    echo "`date -R`: Done handling requests."
    echo "`date -R`: End: $max"
    echo "`date -R`: Sleeping for 1 minute to allow ports to close."
    sleep 60

    return 0
}

function killEverything {
    sudo killall mumps
    sudo /usr/share/zookeeper/bin/zkServer.sh stop
    sleep 5
    sudo killall java
}

make src
killEverything

echo "Connections killed."
while [[ `netstat -pant 2>/dev/null | grep 2181 | wc -l` > 1 ]]; do echo -n "`date -R`: Sleeping while `netstat -pant 2>/dev/null | grep 2181 | wc -l` ports close..."; sleep 5; echo "Done."; done

# start database
sudo /usr/share/zookeeper/bin/zkServer.sh start

# don't apply the query delay for performance testing.
echo 's ^CLUSTER("query-delay")=0' | gtm/gtm-env

# run tests!
for x in 100 200 300 500 1000 2000 3000 5000 10000
do
    perfTest $x
done

Tonight's cluster log file contains the rest of the sequential data.

Also, use this to see how many clients are connected to the server; a count of 1 means only the server itself is running.

sudo bash -c "netstat -pant | grep 2181 | wc -l"

Also, also, fix the fricken uninitialized variable error that's been around since I created the cluster function.

while [[ `netstat -pant 2>/dev/null | grep 2181 | wc -l` > 1 ]]; do echo "`netstat -pant 2>/dev/null | grep 2181 | wc -l` connections."; sleep 5; done

100 requests:

Start
Thu, 30 Apr 2020 00:34:01 -0500
Stop
Thu, 30 Apr 2020 00:35:06 -0500
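
Doing the subtraction by hand gets old, so date can turn those two timestamps into an elapsed-seconds figure (a sketch using the run above):

start="Thu, 30 Apr 2020 00:34:01 -0500"
stop="Thu, 30 Apr 2020 00:35:06 -0500"
# prints 65: about 0.65 seconds per request for the 100-request run
echo $(( $(date -d "$stop" +%s) - $(date -d "$start" +%s) ))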

4.33 [2020-04-30 Thu]

Performance data seems 100x as fast now for some reason. Likely that the earlier runs were being throttled somehow.

Check if my session has died, from another session:

ps -u `whoami` | grep pts

We must've been running into a previous session's throttling; that's all I can think of.

Tell me how many processes are waiting:

while [[ True ]]; do echo -n "`date -R`: Sleeping while $((`pgrep -u \`whoami\` mumps | wc -l`)) processes finish..."; sleep 5; echo "Done."; done

Run the tests:

make src && make data

4.34 [2020-05-01 Fri]

Unfortunately, with the addition of the…

4.35 [2020-05-04 Mon]

Since Grub won't install anymore, let's try this: https://wiki.osdev.org/GRUB

# whole disk on loop0, root partition (4 MiB in) on loop1
sudo losetup /dev/loop0 cs790-p1.img
sudo losetup /dev/loop1 cs790-p1.img -o 4194304
sudo mount /dev/loop1 /mnt
sudo grub-install --root-directory=/mnt --no-floppy --modules="normal part_msdos ext2 multiboot" /dev/loop0
sudo umount /mnt
sudo losetup -d /dev/loop1
sudo losetup -d /dev/loop0

Still nothing. Maybe the chroot is borked? Yup, the boot directory is almost completely empty.

/boot/grub/unicode.pf2

That suggests I don't have a kernel. But I hadn't needed to specify one before. Unless I did previously, thought it useless, removed it, and then lost it after cleaning up the /srv directory; which is what I think happened.
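
For the record, confirming that was just a matter of mounting the root partition again and listing /boot (same offset as earlier):

sudo losetup /dev/loop1 cs790-p1.img -o 4194304
sudo mount /dev/loop1 /mnt
ls -R /mnt/boot    # only grub/unicode.pf2; no kernel or initrd in sight
sudo umount /mnt
sudo losetup -d /dev/loop1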

That's something good to remember about propellor: though the config file is stateless, the system it informs is not necessarily so.

4.36 [2020-05-05 Tue]

So, the screen reads "Waiting for /dev to be fully populated", then it breaks. What?

Turns out that virt-manager doesn't like QXL video on my machine. VGA mode works fine though.

5 References

<bib> =

<<bibliography>>

5.1 Tools

5.1.1 GTM Manual

The official (?) GT.M manual. Hosted on a website that goes down on occasion. Fortunately, it is archived on the Wayback Machine.

@Misc{fnis19:_gtm_manual,
  author =    {Fidelity National Information Services, Inc.},
  title =     "{GT.M Programmer's Guide}",
  howpublished = "\url{http://tinco.pair.com/bhaskar/gtm/doc/books/pg/UNIX_manual/pg_UNIX_screen.pdf}",
  month =     {December},
  year =      2019,
  day =       20
}

5.1.2 ZooKeeper Command Line Client

So, zkCli is published in a GitHub repository. But that's different from the version incorporated into Apache ZooKeeper and made available to me through Debian's buster repository. I'll just cite the most upstream and centralized reference: the ZK website. I should also try the GH-zkCli.

@Misc{zookeeper,
  author =    {Apache Software Foundation},
  title =     "{Apache ZooKeeper}",
  howpublished = "\url{https://zookeeper.apache.org/}",
  version =   {3.4.13},
  month =     {July},
  year =      {2018},
  day =       {15}
}

5.1.3 Propellor

Many propellor references were used, though they're more likely to appear in the development notes than in the final paper.

@Misc{hess19:_propellor_docs,
  author =    {Hess, Joey},
  title =     {propellor: property-based host configuration management in haskell},
  howpublished = "\url{https://hackage.haskell.org/package/propellor-5.6.0}",
  year =      2019,
  month =     {January}
}

@Misc{hess17:_propel_arm_images,
  author =    {Hess, Joey},
  title =     "{custom ARM disk image generation with propellor}",
  howpublished = "\url{https://joeyh.name/blog/entry/custom_ARM_disk_image_generation_with_propellor/}",
  month =     {November},
  year =      2017,
  day =       19
}

@Misc{hess14:_propel_containers,
  author =    {Hess, Joey},
  title =     {propelling containers},
  howpublished = "\url{https://joeyh.name/blog/entry/propelling_containers/}",
  month =     {November},
  year =      2014,
  day =       21
}

@Misc{hess15:_propel_disk_images,
  author =    {Hess, Joey},
  title =     {propelling disk images},
  howpublished = "\url{https://joeyh.name/blog/entry/propelling_disk_images/}",
  month =     {October},
  year =      2015,
  day =       22
}

@Misc{hess17:_high_bandwidth,
  author =    {Hess, Joey},
  title =     {high bandwidth propellor hacking},
  howpublished = "\url{https://joeyh.name/devblog/high_bandwidth_propellor_hacking/}",
  month =     {July},
  year =      2017,
  day =       05
}

@Misc{hess18:_uenkn_os,
  author =    {Hess, Joey},
  title =     {"Unknown host OS" after merging recent propellor},
  howpublished = "\url{https://propellor.branchable.com/forum/__34__Unknown_host_OS__34___after_merging_recent_propellor/}",
  month =     {January},
  year =      2018,
  day =       20
}

5.2 The Rest

@Misc{willis18:_lorik_mumps_devel_tools_gnu_emacs,
  author =    {Willis, John},
  title =     "{LorikeeM MUMPS Developer Tools for GNU Emacs}",
  howpublished = "\url{https://github.com/CoherentLogic/lorikeem/}",
  month =     {February},
  year =      2018,
  day =       17
}

@Misc{newman03:_mumps_docum,
  author =    {Newman, Raymond Douglas},
  title =     "{MUMPS Documentation}",
  howpublished = "\url{http://mumps.sourceforge.net/docs.html}",
  year =      2003,
}

@book{walters1997m,
  title={M {Programming}: a {Comprehensive} {Guide}},
  author={Walters, Richard},
  year=1997,
  publisher={Digital Press},
  address={Boston}
}

@Misc{okane17:_mumps_introd,
  author =    {O'Kane, Kevin},
  title =     {Introduction to the {Mumps} {Language}},
  howpublished = "\url{https://www.cs.uni.edu/~okane/source/MUMPS-MDH/MumpsTutorial.pdf}",
  month =     {November},
  year =      2017,
  day =       4
}

@misc{wiki:MUMPS,
  author = "{Wikipedia contributors}",
  title = "{MUMPS --- Wikipedia, The Free Encyclopedia}",
  year = 2020,
  howpublished = "\url{https://en.wikipedia.org/w/index.php?title=MUMPS&oldid=949599996}",
  note = "[Online; accessed 12-April-2020]"
}

@inproceedings{hunt2010zookeeper,
  title="{ZooKeeper: Wait-free Coordination for Internet-scale Systems}",
  author={Hunt, Patrick and Konar, Mahadev and Junqueira, Flavio Paiva and Reed, Benjamin},
  booktitle={USENIX annual technical conference},
  volume=8,
  number=9,
  year=2010
}

@misc{wiki:rxnorm,
  author = "{Wikipedia contributors}",
  title = "{RxNorm --- Wikipedia, The Free Encyclopedia}",
  year = "2020",
  howpublished = "\url{https://en.wikipedia.org/w/index.php?title=RxNorm&oldid=941723876}",
  note = "[Online; accessed 26-April-2020]"
}

@misc{sverchkov14:_nosql_performance,
  author = "{Sverchkov, Sergey}",
  title = "{Evaluating NoSQL performance: Which database is right for your data?}",
  year = "2014",
  howpublished = "\url{https://jaxenter.com/evaluating-nosql-performance-which-database-is-right-for-your-data-107481.html}",
  note = "[Online; accessed 26-April-2020]"
}

@inproceedings{burrows2006chubby,
  title={The {Chubby} lock service for loosely-coupled distributed systems},
  author={Burrows, Mike},
  booktitle={Proceedings of the 7th symposium on Operating systems design and implementation},
  pages={335--350},
  year=2006
}

@Misc{cloudlab:hardware,
  author =    {CloudLab},
  title =     "{Hardware: CloudLab Wisconsin}",
  howpublished = "\url{https://docs.cloudlab.us/hardware.html#%28part._cloudlab-wisconsin%29}",
  month =     {February},
  year =      2020,
  day =       28,
  note =      "[Online; accessed 29-April-2020]",
}

@Misc{intel2015:epic_scalability,
  author =    {Intel},
  title =     "{InterSystems and VMware Increase Database Scalability for Epic EMR Workload by 60 Percent with Intel Xeon Processor E7 v3 Family}",
  howpublished = "\url{https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/epic-intersystems-vmware-paper.pdf}",
  month =     {April},
  day =       29,
  year =      2015,
  note =    {"[Online; accessed 22-April-2020]"}
}

@inproceedings{bender2013hl7,
  title="{HL7 FHIR: An Agile and RESTful approach to healthcare information exchange}",
  author={Bender, Duane and Sartipi, Kamran},
  booktitle={Proceedings of the 26th IEEE international symposium on computer-based medical systems},
  pages={326--331},
  year={2013},
  organization={IEEE}
}

@Misc{hl7:fhir401,
  author = {HL7},
  publisher = {HL7},
  title = "{FHIR v4.0.1}",
  year = {2019},
  howpublished = "\url{https://www.hl7.org/fhir/}"
}

@Misc{epic:uscdi2020,
  author = {Epic Systems},
  publisher = {Epic Systems},
  title = "{Epic USCDI on FHIR}",
  year = {2020},
  howpublished = "\url{https://uscdi.epic.com/}"
}

@Misc{uscdi:2020v1,
  author = "{The Office of the National Coordinator for Health Information Technology}",
  title = "{U.S. Core Data for Interoperability - 2020 Version 1}",
  year = "{2020}",
  month = "{February}",
  howpublished = "\url{https://www.healthit.gov/isa/united-states-core-data-interoperability-uscdi}",
}

@article{kalra2006electronic,
  author = "{Kalra, Dipak}",
  journal = "{Yearbook of medical informatics}",
  publisher = "{Georg Thieme Verlag KG}",
  title = "{Electronic health record standards}",
  year = "{2006}",
  volume = {15},
  number = {01},
  pages = {136--144},
  howpublished = "\url{https://www.thieme-connect.com/products/ejournals/pdf/10.1055/s-0038-1638463.pdf}",
}

@inproceedings{martin2016gnu,
  title={GNU Health: A Free/Libre Community-based Health Information System},
  author={Mart{\'\i}n, Luis Falc{\'o}n},
  booktitle={Proceedings of the 12th International Symposium on Open Collaboration Companion},
  pages={1--1},
  year={2016}
}

@inproceedings{noll2011qualitative,
  title={A qualitative study of open source software development: The open EMR project},
  author={Noll, John and Beecham, Sarah and Seichter, Dominik},
  booktitle={2011 International Symposium on Empirical Software Engineering and Measurement},
  pages={30--39},
  year={2011},
  organization={IEEE}
}

@inproceedings{wolfe2006openmrs,
  title={The OpenMRS system: collaborating toward an open source EMR for developing countries},
  author={Wolfe, Benjamin A and Mamlin, Burke W and Biondich, Paul G and Fraser, Hamish SF and Jazayeri, Darius and Allen, Christian and Miranda, Justin and Tierney, William M},
  booktitle={AMIA annual symposium proceedings},
  volume={2006},
  pages={1146},
  year={2006},
  organization={American Medical Informatics Association}
}

@article{goulet2007measuring,
  title={Measuring performance directly using the veterans health administration electronic medical record: a comparison with external peer review},
  author={Goulet, Joseph L and Erdos, Joseph and Kancir, Sue and Levin, Forrest L and Wright, Steven M and Daniels, Stanlie M and Nilan, Lynnette and Justice, Amy C},
  journal={Medical care},
  volume={45},
  number={1},
  pages={73},
  year={2007},
  publisher={NIH Public Access}
}

@inproceedings{advani1999integrating,
  title={Integrating a modern knowledge-based system architecture with a legacy VA database: the ATHENA and EON projects at Stanford.},
  author={Advani, Aneel and Tu, Samson and O'Connor, Martin and Coleman, Robert and Goldstein, Mary K and Musen, Mark},
  booktitle={Proceedings of the AMIA Symposium},
  pages={653},
  year={1999},
  organization={American Medical Informatics Association}
}

@misc{fry_schulte_2020,
  title="{Death by a Thousand Clicks: Where Electronic Health Records Went Wrong}",
  howpublished="\url{https://fortune.com/longform/medical-records/}",
  journal={Fortune},
  publisher={Fortune},
  author={Fry, Erika and Schulte, Fred},
  year={2020},
  month={Mar}
}

@misc{parmar_2016_elmiate,
  title={Why electronic records didn't eliminate medical errors},
  howpublished="\url{https://medcitynews.com/2016/03/ehr-eliminate-medical-errors/}",
  journal={MedCity News},
  author={Parmar, Arundhati and Baum, Stephanie and DeArment, Alaric and Dietsche, Erin and Truong, Kevin and Kaiser Health News},
  year={2016},
  month={Mar}
}

@misc{weant_bailey_baker_2014,
  title={Strategies for reducing medication errors in the emergency department},
  howpublished="\url{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4753984/}",
  journal="{Open access emergency medicine : OAEM}",
  publisher={Dove Medical Press},
  author={Weant, Kyle A and Bailey, Abby M and Baker, Stephanie N},
  year={2014},
  month={Jul}
}

6 Meta

6.1 File Cache

Invalidate cached data if either the image's source has changed or the image itself doesn't agree with the expected value.

<md5> =

md5sum $afile

6.2 Org-Mode Options

Org-Mode customization options, which really have no business being at the beginning of the file, are below.

Suppress all the metadata decorations on output.

Tangle everything and cache images.

Use several fancy LaTeX options.

Don't wrap inline source blocks with a new-line, and frame code blocks on LaTeX export.

Date: 2020-05-01 Fri 00:00

Author: Nick Daly

Email: ndaly@wisc.edu

Created: 2023-10-11 Wed 16:03
