This wiki page is intended to instruct remote DRMS users who want to get data from one of our exported Slony data clusters.

Right now, the official cluster list includes: mdi_mag, containing the table mdi.fd_M_96m_lev18.

The following instructions assume you are a Slony log-shipping receiver, or want to become one. For more information, please read ["SlonyI"] and [http://jsoc.stanford.edu/netdrms/]. If you want to become a receiver and we don't know about you yet, please contact either Rick Bogart or Jennifer Spencer at Stanford. This document assumes you can scp/ssh to and from solarport.stanford.edu; if you cannot, you will need to contact us before proceeding. Regularly pulling Slony logs requires that you have an SSH key on our system, and we prefer that you use HPN-SSH, as it is more efficient than regular SSH/SCP (it can be configured to encrypt the password and command exchange but not the data stream).
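If you want to take advantage of that feature, a minimal sketch of a ~/.ssh/config entry is shown below. This assumes a passwordless key in ~/.ssh/id_rsa and an HPN-SSH build that supports the NONE cipher; option names vary between HPN-SSH releases, so check the documentation for your build.

{{{
# Hypothetical ~/.ssh/config entry for pulling Slony logs from Stanford.
# NoneEnabled/NoneSwitch request HPN-SSH's NONE cipher: authentication and
# the command exchange stay encrypted, but the bulk data stream does not.
Host solarport.stanford.edu
    User your_username
    IdentityFile ~/.ssh/id_rsa
    NoneEnabled yes
    NoneSwitch yes
}}}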

Our Slony logs are presently kept at /scr21/jennifer/slony_logs/ under subdirectories named by cluster name. The initial JSD and slony1_dump.sh output needed to set up your cluster are found in those subdirectories as well.
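To see what is available for a cluster, you can first list its subdirectory (username here stands for your own account on our system):

ssh username@solarport.stanford.edu ls /scr21/jennifer/slony_logs/mdi_mag/

Then copy the JSD for the series you are setting up: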

scp username@solarport.stanford.edu:/scr21/jennifer/slony_logs/mdi_mag/mdi.fd_M_96m_lev18.jsd .

Then, if you do not have a database user "slony", you will need to create one. Go into your postgres database as user 'postgres' (this document assumes you have the required password) and create your slony admin-level user. This is the user that will run all your slon processes should you ever change from Slony log-shipping mode to "classic" mode; it does not need a Linux/Unix login.

create user slony with admin postgres password 'slonypassword';

Here 'slonypassword' is a unique password for your system. Then make sure that the password for slony is in user postgres' .pgpass file if you are using md5 security. If user postgres does not have a .pgpass file, you'll need to make one in the postgres home directory. For example:

cat ~/.pgpass
postgreshostname:*:*:slony:slonypassword

Don't forget to make sure the permissions are correct for this file: chmod 600 ~/.pgpass
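To confirm that the new role and the .pgpass entry work, you can try a test connection as user slony (a quick sanity check; postgreshostname and myjsoc_db are the placeholder host and database names used elsewhere on this page, and you will be prompted for the password unless the calling account also has the .pgpass entry):

psql -h postgreshostname -U slony myjsoc_db -c 'select current_user;'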

Next, check your DRMS environment variables:

echo $JSOC_DBHOST
echo $JSOC_DBNAME
echo $JSOC_DBUSER

You should get the correct hostname for the database machine, your local JSOC database name, and the user 'slony'.  If not, set these variables for this session:

$ export JSOC_DBHOST=hostname.stanford.edu
$ export JSOC_DBNAME=myjsoc_db
$ export JSOC_DBUSER=slony

(The above is for bash; csh/tcsh users should use setenv, as shown below.)
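For csh/tcsh, the equivalent commands are:

% setenv JSOC_DBHOST hostname.stanford.edu
% setenv JSOC_DBNAME myjsoc_db
% setenv JSOC_DBUSER slony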

For reference, the namespaces currently defined at JSOC are:

aia
aia_ground
ds_mdi
dsds
hmi
hmi_ground
jsoc
mdi
sdo
sdo_dev
sdo_ground
sha
sid_awe

Create your namespace using the DRMS program masterlists:

masterlists dbuser=slony namespace=mdi nsgrp=user
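To check that the namespace (a Postgres schema) was created, one option is to list the schemas in your database:

psql myjsoc_db -U slony -c '\dn'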

Before creating the series, look over the series-information fields in the JSD file and adjust them for your local setup. The fields most likely to need attention are described here (a sketch of how they might appear in the JSD follows the list).

Owner

May be set to any individual Postgres user on your system, but should be slony if mirror updates are to be handled by slony processing. N.B.: This field in the JSD file is ignored by the create_series program; the owner of the series will be the user running the program, unless changed by the Postgres administrator.

Retention

Default retention time in days for data segments on SUMS disk cache for records in the series; data segments will be flushed no sooner than this many days after having been staged. Note: this value, and the others relating to SUMS segments and archiving, is irrelevant if there are no data segments associated with the series, i.e. if all the data are in the DRMS database.

Archive

Check only if you have a tape archiving system for SUMS and wish to archive the data segments in the series.

Tapegroup

Only applicable for SUMS with tape archive capability; controls which tapes data segments from the series are archived to.
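As an illustration only, the relevant lines in a JSD typically look something like the fragment below; take the exact keywords and values from the mdi.fd_M_96m_lev18.jsd file you copied, since the values shown here are placeholders.

{{{
# Hypothetical fragment of the general series-information section of a JSD
Owner:      slony
Retention:  60
Archive:    0
Tapegroup:  0
}}}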

Use the DRMS create_series module:

{{{
create_series -f mdi.fd_M_96m_lev18.jsd JSOC_DBUSER=slony

Creating new series 'mdi.fd_M_96m_lev18'...
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "fd_m_96m_lev18_pkey" for table "fd_m_96m_lev18"
}}}

Go get the cluster setup data:

scp username@solarport.stanford.edu:/scr21/jennifer/slony_logs/mdi_mag/mdi_mag_slony_setup.sql .

psql -Uslony myjsoc_db -f mdi_mag_slony_setup.sql

You should see some output from this, but not a whole lot. This script will install two tables and three functions, as detailed in SlonyDumpFuncs, and it will copy the data in the cluster into your new series.

To see if it worked, you can enter postgres and check for the new tables and functions:

{{{
psql myjsoc_db -U slony

\dt _mdi_mag.*
(should return two tables, sl_archive_tracking and sl_sequence_offline)

select count(*) from mdi.fd_m_96m_lev18;
(should return over 65,000 records)
}}}

psql -Uslony myjsoc_db -c 'select at_counter from _mdi_mag.sl_archive_tracking;'
This should return a number, e.g. 872 or 100043.  This number is the current log number for this cluster.  

Create a "slony_counter.txt" file wherever you have dedicated a space for your slony logs to arrive for this particular cluster, (e.g. /home/slony/logs/mdi_mag/slony_counter.txt ) and add the number 872 or 100043 - whatever was returned above - as the slony counter.

Make sure you have the get_slony_logs.pl file in your DRMS code JSOC/scripts directory. If not, you may need to update your code. Edit that file's variables to be sure they accurately reflect your environment.
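For instance, if your DRMS tree is checked out under /home/slony/JSOC and you keep working copies of scripts in /home/slony/Scripts (the directory used in the crontab example below; both paths are only examples), you might stage the script like this before editing its variables:

cp /home/slony/JSOC/scripts/get_slony_logs.pl /home/slony/Scripts/
chmod +x /home/slony/Scripts/get_slony_logs.pl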

Edit the crontab and add:

*/1 * * * * (. ~/.ssh-agent_rs;/home/slony/Scripts/get_slony_logs.pl mdi_mag >> /home/slony/Scripts/get_slony_logs.mdi_mag.log 2>&1 )
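One way to confirm the cron job is working is to watch its log file for a while and check that new Slony logs are arriving in your log directory (paths as in the examples above):

tail -f /home/slony/Scripts/get_slony_logs.mdi_mag.log
ls /home/slony/logs/mdi_mag/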

Please enter any additional tips and help to other users here in this wiki.