How To Subscribe to Data Series Created and Maintained by the JSOC-SDP

Background

Solar data at the JSOC are stored in a system that comprises two PostgreSQL databases, multiple file systems, and a tape back-up system. Related sets of data are grouped into data series, each of which is, conceptually, a table of data in which each row is typically associated with an observation time or a Carrington rotation. The columns contain metadata, such as the observation time, the ID of the camera used to acquire the data, the image rotation, etc. One column in this table contains an ID that refers to a set of binary-data files, typically FITS images. The JSOC has written software that implements these conceptual tables by managing the two databases and the file systems. One database, the "DRMS database", contains the metadata of all the data series. The other database, the "SUMS database", contains information about the directories that hold the binary data for the data series. The software that manages the DRMS database is the Data Record Management System (DRMS), and the software/hardware that manages the data files is the Storage Unit Management System (SUMS). The system as a whole is known as DRMS.

A NetDRMS site is an institution with a local DRMS installation; that is, any installation other than the Stanford one that generates the production data series (e.g., hmi.M_720s, aia.lev1) for scientific use. A NetDRMS site can generate its own data: it can create software that uses the DRMS library to produce its own data series. It can also act as a "mirror" for individual data series. When acting as a mirror for a Stanford data series, the site downloads the DRMS database information from Stanford and stores it in its own DRMS database, and it downloads the SUMS files and stores them in its own SUMS system. As the data files are downloaded to the local SUMS, the SUMS database is updated with the information needed to manage the files. It is possible for a NetDRMS site to mirror the DRMS data of any other NetDRMS site, but at present the only site whose data is mirrored is the Stanford JSOC.

Mirroring Data

In order for a NetDRMS site to mirror a data series generated at the JSOC, the data series must first be "published" at the JSOC; NetDRMS sites can mirror only data series that have been published there. The mirroring site must then "subscribe" to the data series. The subscription process is responsible for downloading the DRMS database information to the mirroring site. At the time of subscription, the existing DRMS database information is downloaded to the NetDRMS site and ingested into its DRMS database. As changes are made to the data series at the JSOC, those changes are propagated to the mirroring site and ingested into the local DRMS database. JSOC developers implemented this publication/subscription feature using Slony-I, PostgreSQL data-replication software (see Series Publication Notes), along with supporting scripts written at Stanford. Slony-I servers running at the JSOC intercept changes to published data series and generate text files containing the SQL commands that update the DRMS database information at the mirroring site. The supporting scripts are responsible for dumping the database information at the time of subscription and for transferring it to, and ingesting it into, the NetDRMS site. The scripts are also responsible for downloading and ingesting the "update files" for the data series. The mirrored data series can be updated every couple of minutes, so the data series at the JSOC and those at the mirroring site are synchronized in near real time.

The update files generated by Slony-I contain SQL. For example, if the JSOC creates a new record of data for a published data series, which entails inserting a row in a DRMS database table, an update file will contain an SQL INSERT statement that, if executed, duplicates the insertion performed at the JSOC. NetDRMS sites that subscribe to this data series receive the update file containing the SQL INSERT statement and ingest it (they run a psql command - psql is the PostgreSQL front-end - that blindly runs all SQL commands in the update file). As a note on security, these update files can also contain SQL UPDATE and DELETE statements. A Perl script running at the mirroring site looks for these update files at the JSOC, downloads them, and runs psql to execute the SQL commands contained within the files.
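For illustration only, an update file for a subscribed series might contain statements of this general form (the table and column names here are hypothetical and heavily simplified; the real statements are generated by Slony-I from the actual series tables):

INSERT INTO hmi.m_45s (recnum, sunum, slotnum) VALUES (123456789, 987654321, 0);
UPDATE hmi.m_45s SET sunum = 987654322 WHERE recnum = 123456789;

The client-side scripts simply feed such a file to psql (e.g., with psql -f), so the statements run exactly as they were generated at the JSOC.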

Subscribing to a Data Series

To subscribe to a data series, a privileged operator at the mirroring NetDRMS site runs the subscribe.py client script. This script, which can be run as many times as desired, allows the operator to subscribe to one or more series at a time. The first time it is run, the subscription server (running at the site running Slony-I) creates a directory for the site in which the site-specific Slony-I update files will reside. These files contain statements germane only to the set of series to which the site is subscribed.

In brief, the subscription server satisfies a client subscription request by sending the client a file containing a SQL dump of the database information for the data series to which the client is subscribing. The file will also contain other set-up commands, depending on whether the client is a first-time subscriber. For example, if a client requests a subscription to hmi.M_45s, the dump file will contain commands to create the hmi namespace (if it does not already exist), the hmi.m_45s table (which must not exist at subscription time), the appropriate entries in the hmi.drms_keyword, hmi.drms_segment, hmi.drms_link, and hmi.drms_series tables, and many INSERT commands. The INSERT commands populate the hmi.m_45s table with a copy of all the records present in the corresponding table in the server database at the time of subscription.
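As a rough, simplified sketch (not the literal server output), the dump for such a request has this general shape; the real file defines every keyword column of the series and also populates the hmi.drms_* metadata tables:

CREATE SCHEMA hmi;
CREATE TABLE hmi.m_45s (
    recnum  bigint PRIMARY KEY,
    sunum   bigint,
    slotnum integer
    -- plus one column per DRMS keyword defined for the series
);
INSERT INTO hmi.m_45s (recnum, sunum, slotnum) VALUES (1, 100200300, 0);
INSERT INTO hmi.m_45s (recnum, sunum, slotnum) VALUES (2, 100200301, 0);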

After the initial subscription completes (subscribe.py has completed successfully), the client has a static snapshot of the series at the time the command was issued. Changes to the original series on the server will not automatically be applied to the client's database. However, the server will regularly generate site-specific Slony-I update files for the client. These update files contain INSERT (and UPDATE and DELETE) statements that, when applied by the client, will update the subscribed-series tables so that, in effect, the client tables are kept synchronized with the corresponding server tables. The client must regularly download these log files and apply them to remain synchronized with the server.
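Many sites drive this regular download-and-apply step from cron. A minimal sketch, assuming the ingestion script (get_slony_logs.pl, described under Configuration File below) takes the client configuration file as its only argument; the paths are illustrative and you should check your NetDRMS copy of the script for its actual usage:

*/10 * * * * /usr/local/netdrms/replication/get_slony_logs.pl /usr/local/netdrms/etc/repclient.cfg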

It is also possible to discontinue one or more subscriptions with the subscribe.py script. Site-specific update files generated after the completion of subscribe.py in this mode will no longer contain any statements relevant to the data series dropped from subscription. The un-subscription process does not remove from the client DRMS database the data for the data series being removed. The data series remain intact and function normally in every way; but they will become static snapshots of the server series at the time of un-subscription. Subsequent changes to the corresponding server data series will not propagate to the client.

Before a client can run subscribe.py, several steps must be taken. First, the client must set up an ssh-agent session so that noninteractive ssh-based communications between the client and server can be established. To do this, follow the directions in SSHKeyNotes, ensuring that you do create a passphrase. Second, the client must edit a configuration file that contains server and client information, such as the server's host machine and port, the server account that ssh uses, the path to the client's scp command, client directories to receive the slony logs, etc. We provide a documented template configuration file at base/drms/replication/etc/repclient.template.cfg (see the section entitled Configuration File below for more information). This configuration file is used for all client-side programs/scripts, including subscribe.py.
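For reference, setting up the agent typically looks like the following (the key file name and path are illustrative; SSHKeyNotes is the authoritative reference):

% eval `ssh-agent -s`
% ssh-add ~/.ssh/id_rsa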

After completing the set-up, the client operator runs subscribe.py with three arguments: the path to the configuration file (see Configuration File below), the request type ('subscribe', 'resubscribe', or 'unsubscribe'), and the data series (e.g., hmi.V_45s).
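For example, a request to subscribe to hmi.V_45s might look like the following (the exact argument syntax can vary between NetDRMS releases, so check the script's usage text):

% subscribe.py <path>/repclient.cfg subscribe hmi.V_45s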

Configuration File

The configuration file contains configuration information needed by several client-side programs and scripts (e.g., subscribe.py, subscribe_series, get_slony_logs.pl). It consists of a set of key-value pairs; values appropriate when the Stanford JSOC is the server are given in parentheses below:

node
    The name of the subscription client. Must be globally unique across all NetDRMS sites. This string will be used in various state files and in file/directory names.
kRSServer
    The full domain name of the server (solarport.stanford.edu).
kRSUser
    The account on kRSServer that will be used for data transfer (jsocexp).
kRSTriggerDir
    The directory, accessible from the kRSUser account, where data files will be staged (/data/pgsql/slon_logs/live/triggers/).
kRSPort
    The port on kRSServer that will be used for data transfer (55000).
pg_host
    The client machine that hosts the client PostgreSQL database that will contain the replicated data series.
pg_port
    The port on the pg_host machine that will be used for communication with the data-series database.
pg_user
    The PostgreSQL user that will own the replicated series (slony).
pg_dbname
    The name of the PostgreSQL database that resides on pg_host.
slony_cluster
    The name of the Slony cluster to which this node belongs (jsoc).
kLocalLogDir
    The client directory that will contain log files.
kLocalWorkingDir
    The client directory that will contain temporary working files.
kSQLIngestionProgram
    The path to the script/program that will ingest the site-specific Slony logs (usually the path to get_slony_logs.pl).
kDeleteSeriesProgram
    The path to the program delete_series, used to delete DRMS data series on the client.
attempts
    The number of attempts that the client should make when looking for the flag file indicating that the SQL file is ready on the server.
ingestion_path
    The local directory that will contain the ingestion "die" file (used by get_slony_logs.pl).
scp_cmd
    The absolute path to the client's scp program.
ssh_cmd
    The absolute path to the client's ssh program.
rmt_slony_dir
    The absolute path on the server, accessible from the kRSUser account, to the directory that contains the site-specific Slony logs (/data/pgsql/slon_logs/live/site_logs).
slony_logs
    The client directory that will contain the downloaded site-specific Slony logs.
PSQL
    The path to the client's psql program, plus any flags needed to run psql as the pg_user user (e.g., -h pg_host).
email_list
    The email account to which error messages will be sent.
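Putting this together, a client configuration might look something like the following. All client-side values are purely illustrative (hypothetical host names and paths), and the exact line syntax should be taken from repclient.template.cfg:

node=mysite
kRSServer=solarport.stanford.edu
kRSUser=jsocexp
kRSTriggerDir=/data/pgsql/slon_logs/live/triggers/
kRSPort=55000
pg_host=drmsdb.mysite.edu
pg_port=5432
pg_user=slony
pg_dbname=data
slony_cluster=jsoc
kLocalLogDir=/usr/local/netdrms/replication/log
kLocalWorkingDir=/usr/local/netdrms/replication/work
kSQLIngestionProgram=/usr/local/netdrms/replication/get_slony_logs.pl
kDeleteSeriesProgram=/usr/local/netdrms/bin/delete_series
attempts=10
ingestion_path=/usr/local/netdrms/replication/ingest
scp_cmd=/usr/bin/scp
ssh_cmd=/usr/bin/ssh
rmt_slony_dir=/data/pgsql/slon_logs/live/site_logs
slony_logs=/usr/local/netdrms/replication/site_logs
PSQL=/usr/bin/psql -h drmsdb.mysite.edu
email_list=netdrms-admin@mysite.edu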

Cleaning Up

To clean the subscription client environment, the following steps must be performed by the database user identified by the pg_user parameter of the configuration file (if you have used the suggested user, this is 'slony'):

First, delete each subscribed series (include -k if you want to keep the associated SUMS Storage Units):

% delete_series [ -k ] <series> JSOC_DBUSER=slony

Next, drop the replicated schema and remove its entry from admin.ns:

% psql -h <db server host> -p 5432 -U slony data
data=# DROP SCHEMA <schema> CASCADE;
data=# DELETE FROM admin.ns WHERE name = '<schema>';

Finally, drop the Slony cluster schema:

% psql -h <db server host> -p 5432 -U slony data
data=# DROP SCHEMA _jsoc CASCADE;

To clean the subscription server environment:

Re-subscribing

You might occasionally need to re-subscribe to a series. There could be bugs at the JSOC or in your site's NetDRMS that result in your site missing some DRMS records (or having extra records). An easy way to reconcile these differences is to unsubscribe from the series, then to re-subscribe to the series. Right after you re-subscribe, your mirrored series will match the JSOC version exactly.

If you unsubscribe from a series by running the appropriate subscribe_series command:

subscribe_series <path>/repclient.cfg <path>/subscribe_list.cfg <path>/.ssh-agent_rs

where subscribe_list.cfg contains this line:

hmi.someseries                    unsubscribe

and you answer 'Y' to this prompt:

Would you like to delete series hmi.someseries? (Y/N)

then subscribe_series will delete all SUMS Storage Units (SUs) that you have ingested into your SUMS. subscribe_series deletes the series and all associated SUs by calling a program, delete_series, that is part of your NetDRMS. But, since you are going to want to re-subscribe to the series, it would be nice to keep the SUs in your SUMS so that they survive the re-subscription process and do not need to be re-downloaded. So, if your goal is to keep the SUs, but re-subscribe to a series, do not use this method as-is.

Until I can add a new option that causes SUs to be preserved, there are two methods that can be used to unsubscribe from a series without deleting the SUs:

  1. Manually edit your subscribe_series script. In the DeleteSeries() function, modify the following line:

    echo "yes"$'\n'"yes"$'\n' | "$delseriesprog" "$series" JSOC_DBUSER=slony 


    to

    echo "yes"$'\n'"yes"$'\n' | "$delseriesprog" "-k" "$series" JSOC_DBUSER=slony


    Add a "-k" (lower-case k) to the command line to the program delete_series. After you do this, then you can run subscribe_series with an unsubscribe line in the subscribe_list.cfg file. Remember to revert your subscribe_series program back to the way it was when you have finished unsubscribing.

  2. Run subscribe_series with an unsubscribe parameter in the subscribe_list.cfg file as you normally would, BUT answer "N" to this prompt:

    Would you like to delete series hmi.someseries? (Y/N) 


    Then after the unsubscription process has completed, run this program:

    > delete_series -k hmi.someseries  JSOC_DBUSER=slony 

At this point, you will have successfully unsubscribed from hmi.someseries, but you will still have the SUs associated with that series in your SUMS. The next step is to subscribe to the series you just unsubscribed from:

> subscribe_series <path>/repclient.cfg <path>/subscribe_list.cfg <path>/.ssh-agent_rs

where subscribe_list.cfg contains this line:

hmi.someseries                    subscribe

Finally, if you use the JMD, you will need to set up the hmi.someseries database table so that the JMD knows to fetch SUs for newly ingested records. To do that, run:

> sunum_queue_trigger_sampled.pl <arguments>

Unfortunately, I have never run this program, so I do not know which arguments should be provided. The good news is that, since you had the JMD fetching SUs for hmi.someseries before the unsubscription occurred, you have run sunum_queue_trigger_sampled.pl before. Use the same command you did the last time you subscribed to hmi.someseries.