
Becoming a log-shipping receiver for a Slony cluster

This wiki page is intended to instruct remote DRMS users of our exported Slony data clusters. If you're reading this, we assume you want to get a dynamic set of our data and regularly pull updates from us. We assume you have system resources set aside, a postgres database set up, and a copy of the JSOC DRMS code in place. A basic understanding of SSH/SCP will be useful. Pulling Slony logs requires you to have certain trusted privileges on our system via ssh keys, so we want to know who you are. If we don't know you yet, or this is your first time setting up a remote DRMS, please contact either Rick Bogart or Jennifer Spencer at Stanford (see the ["Contacts"] page).

For more information, please read ["SlonyI"] and [http://jsoc.stanford.edu/netdrms/]. Please note there is more to becoming a remote DRMS than pulling Slony logs; read the NetDRMS documentation (link above) for more information if needed. For more on dealing with SSH keys and setting up a remote SUMS, see [http://jsoc.stanford.edu/netdrms/rmtsums.html].

The following document assumes you have already set up scp/ssh to and from solarport.stanford.edu. If you cannot do so, you will need to contact us before proceeding. Regularly pulling Slony logs requires that you have a key on our system, and we prefer you use HPN-SSH, as it is more efficient than regular SSH/SCP (it can be configured to encrypt the password and command exchange but not the data stream).
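As a quick sanity check before proceeding, you can confirm that your key-based login works. This is only a sketch: the port number 55000 mirrors the get_slony_logs.pl configuration shown later on this page, and is an assumption for your site.

{{{
# hypothetical connectivity test; should print "ok" without prompting for a password
ssh -p 55000 username@solarport.stanford.edu echo ok
}}}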


Each Slony cluster represents a discrete set of JSOC data. Pulling logs and data must be done by cluster. It is our intention to have several clusters, each tailored to suit as many users as possible. A Slony cluster is not divisible; we can only replicate our data per cluster, not per user, so you may expect to subscribe to more than one cluster at a time. These instructions are for working with one cluster, and are meant to be extrapolated for use with many clusters.

The official available cluster list:

||Cluster name||Namespace.Seriesname||
||mdi_mag||mdi.fd_m_96m_lev18||

Our Slony logs are presently kept at /scr21/jennifer/slony_logs/ under subdirectories by cluster name. The initial JSD and slony1_dump.sh output needed to set up your cluster are found in those subdirectories as well.
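If you are unsure which files are available for a given cluster, you can list the directory contents over ssh first (a sketch; substitute your own login for username):

{{{
ssh username@solarport.stanford.edu ls /scr21/jennifer/slony_logs/mdi_mag/
}}}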

Get a Cluster

  • Go get the JSD file(s) for the cluster.

{{{
scp username@solarport.stanford.edu:/scr21/jennifer/slony_logs/mdi_mag/mdi.fd_M_96m_lev18.jsd .
}}}

Then, if you do not have a database user "slony", you'll need to create one. Check whether you have one in postgres:

{{{
psql mydatabase
Welcome to psql 8.x.x, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

jsoc=# \du    [to describe the users in your database]
                                List of roles
 Role name | Superuser | Create role | Create DB | Connections |       Member of
-----------+-----------+-------------+-----------+-------------+-------------------------
 apache    | no        | no          | no        | no limit    | {jsoc_reader}
 arta      | no        | no          | yes       | no limit    | {dsdsdata,jsoc,sdodata}
 ash       | no        | no          | no        | no limit    | {}
...
 slony     | yes       | yes         | yes       | no limit    | {}

jsoc=# \q
}}}

A user slony will appear as above if one exists on your system. Otherwise, you'll need to add one.

To add the 'slony' user, go into your postgres database as user 'postgres' (this document assumes you have the required password). Then create your slony admin-level user. This is the user that will handle running all your slon processes should you ever change from Slony log-shipping mode to "classic" mode, and you do not need to have a Linux/Unix login for this user. Note that if you are planning on running replication on the same machine as the database server, you may have authentication issues logging in as user 'slony' on the localhost. You may have to tell postgres the hostname (add -h myhost) for all psql commands running on that system if your pg_hba.conf file (in your postgres data directory) is configured with 'ident sameuser' for localhost.

{{{
create user slony with admin postgres password 'slonypassword';
}}}

where 'slonypassword' is a unique password for your system. Then make sure that the password for slony is in user postgres' .pgpass file if you are using md5 security. If user postgres does not have a .pgpass file, you'll need to make one in the postgres home directory. For example:

{{{
cat ~/.pgpass
postgreshostname:*:*:slony:slonypassword
}}}

Don't forget to make sure the permissions are correct for this file: chmod 600 ~/.pgpass

  • Confirm that you can log in as user slony before progressing further. Make sure you are logged into Unix/Linux as the user that has the slony password in its .pgpass file.

{{{
psql -U slony myjsoc_db
}}}

If this does not work, and you are on the same server as is running your database, you may try:

{{{
psql -U slony -h myhostserver myjsoc_db
}}}

Further issues are likely due to your pg_hba.conf file settings, which require a 'reload' of postgres to take effect; an illustrative entry is sketched below. See [http://www.postgresql.org/docs/8.3/interactive/auth-pg-hba-conf.html the Postgres documentation on this topic] for more.
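For reference, a pg_hba.conf entry allowing md5-authenticated TCP connections for user slony might look like the following; the database name and address here are assumptions for illustration only:

{{{
# pg_hba.conf -- illustrative entry only; adjust database, address, and method
# TYPE   DATABASE    USER    CIDR-ADDRESS     METHOD
host     myjsoc_db   slony   127.0.0.1/32     md5
}}}

and then reload postgres so the change takes effect, e.g. (data directory path is an assumption):

{{{
pg_ctl reload -D /var/lib/pgsql/data
}}}

Once you can correctly log in to your database as user slony, proceed to the next step below.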

  • Check your environment variables:

{{{
echo $JSOC_DBHOST
echo $JSOC_DBNAME
echo $JSOC_DBUSER
}}}

You should get the correct hostname for the database machine, your local JSOC database name, and the user 'slony'.  If not, set these variables for this session:

For Bash users:
{{{
export JSOC_DBHOST=hostname.stanford.edu
export JSOC_DBNAME=myjsoc_db
export JSOC_DBUSER=slony
}}}

For C shell users:
{{{
setenv JSOC_DBHOST hostname.stanford.edu
setenv JSOC_DBNAME myjsoc_db
setenv JSOC_DBUSER slony
}}}
  • You will need to create any namespaces in your database referenced by this cluster. In this example, you will need to create an 'mdi' namespace if you do not have one.

If you need a namespace similar to an existing reserved namespace, for example to identify analysis or repackaged products based on data from various sources, we recommend that you create one from your institutional identifier and the source identifier, e.g. sha_gong, cu_mdi, mps_hmi, etc.

Provenance of data is critical to a successful science mission. We must all work together to maintain the integrity of our data namespaces so that data provenance remains unequivocally clear. The JSOC team reserves the following namespaces and explicitly forbids remote DRMS users from adding any new data to these spaces; only replicated data received from the JSOC is allowed in a JSOC-reserved namespace. To put it more generally, any data series in a DRMS with one of these reserved namespaces must have been generated at or replicated from the owning site. We expect that remote DRMS sites will have a similar policy about their own namespace designations, and we encourage them to document accordingly.

Reserved Namespaces

Please see [:DrmsSeriesNames#namespaces] for a complete list of reserved namespaces.

  • Create your namespace using the DRMS program masterlists:

{{{
masterlists dbuser=slony namespace=mdi nsgrp=user
}}}
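To confirm the namespace was created, you can list the schemas in your database; this is a quick check, assuming your database is named myjsoc_db as in the examples above:

{{{
psql -U slony myjsoc_db -c '\dn'
}}}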

  • Edit the header of the JSD as needed to reflect your particular setup.

You will only need to change the header, called "General Series Information", and even that may not require changes for your system. Do not change anything beyond the first line of #=====Links=====. If you change any links, keywords, or segments, your replication may fail; or it may not fail, but instead cause mysterious display problems. So please leave those alone. That said, here are some of the header fields you may need to change for your system.

{{{
#=====General Series Information=====
Seriesname:             mdi.fd_M_96m_lev18
Author:                 "phil"
Owner:                  slony
Unitsize:               15
Archive:                0
Retention:              10000
Tapegroup:              1
PrimeKeys:              T_REC
DBIndex:                T_REC
Description:            "MDI"

#=====Links=====
(stop here!)
}}}

Owner

May be set to any individual Postgres user on your system, but should be user slony if mirror updates are to be handled by slony processing. N.B.: This field in the JSD file is ignored by the create_series program; the owner of the series will be the user running the program, unless changed by the Postgres administrator.

Retention

Default retention time in days for data segments on SUMS disk cache for records in the series; data segments will be flushed no sooner than this many days after having been staged. Note: this value, and the others relating to SUMS segments and archiving, is irrelevant if there are no data segments associated with the series, i.e. if all the data are in the DRMS database.

Archive

Set to 1 only if you have a tape archiving system for SUMS and wish to archive the data segments in the series.

Tapegroup

Only applicable for SUMS with tape archive capability; controls which tapes data segments from the series are archived to.

  • Create the series in the cluster as user slony. That way, database user slony will own the series, and it can remain read-only to all other local users.

Use the DRMS create_series module:

{{{
create_series mdi.fd_M_96m_lev18.jsd JSOC_DBUSER=slony

Creating new series 'mdi.fd_M_96m_lev18'...
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "fd_m_96m_lev18_pkey" for table "fd_m_96m_lev18"
}}}
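If you want to verify the result, a quick check is to describe the new table as seen by postgres (a sketch, using the example database name from earlier):

{{{
psql -U slony myjsoc_db -c '\d mdi.fd_m_96m_lev18'
}}}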

  • Now install the slony cluster

Go get the cluster setup data:

{{{
scp username@solarport.stanford.edu:/scr21/jennifer/slony_logs/mdi_mag/mdi_mag_slony_setup.sql .
}}}

  • Ingest this data in psql

{{{
psql -U slony myjsoc_db -f mdi_mag_slony_setup.sql
}}}

You should see some output from this, but not a whole lot. This script will install two tables and three functions, as detailed in SlonyDumpFuncs, and it will copy the data in the cluster into your new series.

To see if it worked, you can enter postgres and check for the new tables and functions:

{{{
psql myjsoc_db -U slony

\dt _mdi_mag.
(should return two tables, sl_archive_tracking and sl_sequence_offline)

select count(*) from mdi.fd_m_96m_lev18;
(should return over 65,000 records)
}}}

  • Get ready to start pulling Slony logs

Make sure you have the get_slony_logs.pl file in your DRMS code JSOC/scripts directory. If not, you may need to update your code.

  • Edit the get_slony_logs.pl configuration variables so that they accurately reflect your environment.

{{{
#### CONFIGURATION variables
$Net::SCP::scp="/usr/bin/scp";                           ## path to fast scp instance
my $rmt_hostname="solarport.stanford.edu";               ## remote machine name
my $rmt_port="55000";                                    ## port to fast scp server on remote machine
my $user="igor";                                         ## user login on remote machine
my $rmt_slony_dir="/scr21/jennifer/slony_logs";          ## directory on remote machine where slony logs get staged
my $slony_logs="/home/slony/logs";                       ## local slony cluster directory area
my $PSQL="/usr/bin/psql -Uslony nso_drms";               ## psql path
my $email_list='igor@igorhost.edu';                      ## list of people to email if a problem occurs
#### END of CONFIGURATION variables
###########################################################################
}}}
  • Designate a directory where your acquired Slony logs will be stored, with a cluster subdirectory, e.g. /home/slony/slony_logs/mdi_mag (where mdi_mag is the cluster), as sketched below. These logs are not huge files, but there will be a lot of them over time. Slony generates logs at least every two minutes; a log file that documents no change in the system is typically ~700 bytes, but even that, reported every two minutes for weeks, will add up, so choose a good spot for these logs and keep an eye on their growth. Another script, slony_logs_housekeeping.pl, is available to tar.gz your log files in a given directory, one tar.gz file per 5000 logs (the file interval is settable). You may wish to run the housekeeping script from cron to ensure the logs aren't eating up space too quickly, and delete or archive the tar files eventually.
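A minimal sketch of the directory setup, assuming the /home/slony/logs area used in the configuration variables above:

{{{
# create the per-cluster log directory and check its growth occasionally
mkdir -p /home/slony/logs/mdi_mag
du -sh /home/slony/logs/mdi_mag
}}}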
  • Make sure that your ssh keys are proper and that the source (Stanford, generally) has your ssh public key in its authorized keys file. This may require a phone call or an email to the source host.

  • Source your ssh agent file to be sure your environment variables are ready to pull data (see the sketch below). Typically, this agent file is named something like /home/mylogin/.ssh-agent_rs or ~/.ssh-agent.csh, etc., where mylogin is the login of the user designated to do the automated slony log retrieval.
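If you have never created such an agent file, one possible sequence is sketched below; the file name ~/.ssh-agent_rs and the key path are assumptions, so match them to your own setup:

{{{
# start an agent and save its environment settings to a file (sh syntax)
ssh-agent > ~/.ssh-agent_rs
# source the file so this shell can talk to the agent
. ~/.ssh-agent_rs
# load your private key into the agent (assumed path; use your actual key)
ssh-add ~/.ssh/id_rsa
}}}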

  • Run: ./get_slony_logs.pl mdi_mag

This script assumes you've sourced your .ssh-agent file and will not work properly without doing that first. Typical errors include 'permission denied' or a request for a passphrase or password.

The first time you run get_slony_logs.pl, it will create a file called slony_counter.txt (e.g. /home/slony/logs/mdi_mag/slony_counter.txt). The file contains a number, and that number is the current slony counter for your data set. If you want to double-check the value in slony_counter.txt, you can run:

{{{
psql -U slony myjsoc_db -c 'select at_counter from _mdi_mag.sl_archive_tracking;'
}}}

This should return a number, e.g. 872 or 100043. This number is the current log number for this cluster, and should match the number in slony_counter.txt.

We do not recommend editing the slony_counter.txt file. The get_slony_logs.pl script will manage the value in the counter file, and if there is an error, it will double-check with the database for the correct number.
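To compare the two values side by side, a quick check (using the example paths and database name from above) is:

{{{
cat /home/slony/logs/mdi_mag/slony_counter.txt
psql -U slony myjsoc_db -t -c 'select at_counter from _mdi_mag.sl_archive_tracking;'
}}}

The two numbers should agree; the -t flag just suppresses psql's header and footer output.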

  • Get your logs with a cron

Edit the crontab and add:

{{{
*/1 * * * * (. ~/.ssh-agent_rs; /home/slony/Scripts/get_slony_logs.pl mdi_mag >> /home/slony/Scripts/get_slony_logs.mdi_mag.log 2>&1)
}}}
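To confirm the cron job is running cleanly, you can watch the log file named in the crontab entry above:

{{{
tail -f /home/slony/Scripts/get_slony_logs.mdi_mag.log
}}}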

Please enter any additional tips and help to other users here in this wiki.
