Slony-1 Documentation

[http://slony.info/ Slony-I Docs]

A Bit about Slony-1

Slony-1 is a database replication engine used for Postgres databases. There are others, but so far Slony-1 seems to be the leader. The JSOC will be running Slony-1 to handle remote DRMS replicas and possibly for other in-house tasks.

There are two different ways to implement Slony-1.

One is a traditional method, where each database to be replicated has Slony-1 fully installed and each cluster is designated with a node number. Each node communicates with the other nodes as an equal, though for data items an "origin", or master, is defined. Running in this way requires each Slony-1 node to have database superuser privileges in order to communicate with the others. If all nodes are in-house, this superuser access is less of a concern. For the JSOC, with nodes distributed worldwide, it would be more of a security risk.

The second Slony-1 implementation method is called "log shipping". Log shipping requires no external systems to have database superuser privileges. To use the log-shipping method, first decide whether you wish to be a passive receiver or a log shipper. The JSOC will be a log shipper.

To generate logs, you must have Slony-1 fully installed and designate one cluster as a master node. Then you must create a slave node to receive the master's replications, as a regular Slony-1 node (above). The words master and slave are used here only to give an idea of which node is changing data; in Slony-1 parlance, all nodes are equal, but the JSOC will designate a master and a slave. When a traditional Slony-1 master node is replicating to a slave node, the slave node can be asked to log every change that the master instructed. Those log files are a series of SQL commands that the slave was directed to make by the master. A specific "slon" daemon is run to generate those logs. It will be necessary to customize one slon daemon per remote DRMS/replica, since each one will likely have different data needs.

If we direct a third database to carry out those logged SQL commands, it will, in effect, replicate the master's commanded changes. The log files can be shipped to other systems as desired. If a remote system (a "replica" or "remote DRMS") wishes only to read (not write) these logs, it has only to run a basic script. The script, slony1_dump.sh (received from the master node upon request, and customized for each installation), installs two very basic tables and two functions in a new namespace, and then uploads the full data available for the tables desired. Further scripts are necessary to keep a steady flow of data going automatically between the master and the remote DRMS, but these are straightforward enough.

If a remote replica/DRMS wishes to ship new changes back to the JSOC, it would have to install a traditional Slony-1 setup with a master and a log-shipping slave node, and then ship only new data back to the JSOC (i.e., not something the JSOC is already sending as a log to the remote DRMS site).
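
The receiving side of log shipping described above can be sketched as a small ingest loop. This is a minimal sketch, not JSOC code: the log directory, database name, and file-naming pattern are assumptions.

```shell
#!/bin/sh
# Sketch of a replica-side ingest loop: apply each shipped Slony-1 log
# file, in sequence, with psql. LOGDIR, DBNAME, and the *.sql naming are
# assumptions. With DRYRUN=1 the psql commands are only printed, so the
# loop can be previewed without a live database.
LOGDIR=${LOGDIR:-/var/spool/slony_logs}
DBNAME=${DBNAME:-drms_replica}

ingest_logs() {
    # Globs expand in lexical order, which matches the numeric order of
    # zero-padded log counters.
    for f in "$LOGDIR"/*.sql; do
        [ -e "$f" ] || continue                    # no logs waiting
        if [ "${DRYRUN:-0}" = 1 ]; then
            echo "psql -d $DBNAME -f $f"
        else
            psql -d "$DBNAME" -f "$f" || return 1  # stop on first failure
        fi
        mv "$f" "$f.done"                          # never ingest a log twice
    done
}

ingest_logs
```

Renaming each applied log to *.done is one simple way to guarantee a log is never replayed; a real deployment could instead record the last-applied counter in a state file.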

In a traditional Slony-1 setup, changes are replicated across nodes very quickly, at least once per minute. In log shipping, a new log file is generated at least once per minute. The size of the log file for a small database with no new activity is ~800 KB. A log file directing the insertion of 100,000 records into a table with five columns was 9 MB. The speed of log-shipping replication is determined by how often remote users download their logs, the amount of activity in the data tables in the cluster, and network traffic.

Difference Between a Slony-1 Replica and a Warm Standby Server

1. Scope: per-table vs whole database

  • A Warm Standby preserves all user account settings, all database meta-data, and other system information. Schema changes are automatically included. A replica will not incorporate any user information or schema changes (e.g. "create series" and "delete series") unless specifically directed to do so.

2. Availability: available vs standby

  • A replica is available for use even while it is ingesting logs. A standby is exactly that: standing by. A "feature" of Postgres is that while it is reading transaction logs (different from Slony-1 logs), it is unavailable to all users. Postgres considers the database to be in "recovery" or "standby" mode while it is reading transaction logs. To keep the standby up to date, it must constantly read any available transaction logs. Technically, there may be breaks where it is available because it has finished reading the current log, but don't count on it.

Replication Lag Requirement

From Phil's notes:

1. Access to quicklook data:

  • For Ops at LMSAL, the lag should be a few minutes from real time. For NOAA, JSC, the Air Force, etc., roughly 5-10 minutes is probably acceptable.

2. Clone for web access:

  • This can have a lag of 10s of minutes, maybe an hour or more, except for quicklook images and movies of certain series. I wonder if we should prepare some data for general-purpose wide access via the web and move it to a web server that does not have full database access. That could be done in parallel with a full-access clone of DRMS that has a lag and is used for searches of archived data for export.

3. Remote replicas:

Locations of Remote Replicas

0. LM

1. MSSL

2. Germany

3. SAO

4. India

Test Tasks

1. Log shipping

  • Please see the description above, "A Bit about Slony-1".

2. What happens when DB server goes down and when DB server reboots

  • If the master DB is down, a different server with a different node would have to become the new master. This is a well-defined task in the Slony-1 literature. During a reboot, the node would be unavailable, and no logs would be created. Slony-1 is a separate daemon, so it will keep retrying until it is able to communicate with the required node; otherwise it will gracefully fail to communicate.
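
A failover of this kind is driven by a short script in slonik, the Slony-1 admin language. The sketch below wraps one in a shell script; the cluster name "jsoc", node ids, and conninfo strings are hypothetical, while FAILOVER itself is the documented slonik command. With DRYRUN=1 (the default here) the script is only printed, not executed.

```shell
#!/bin/sh
# Sketch: promote node 2 to master after node 1 (the origin) has died.
# Cluster name, node ids, and conninfo strings are placeholders.
DRYRUN=${DRYRUN:-1}
f=$(mktemp)

# FAILOVER abandons the dead node and makes the backup node the origin
# of every set the dead node owned.
cat > "$f" <<'EOF'
cluster name = jsoc;
node 1 admin conninfo = 'dbname=jsoc host=old-master user=slony';
node 2 admin conninfo = 'dbname=jsoc host=standby user=slony';
failover (id = 1, backup node = 2);
EOF

if [ "$DRYRUN" = 1 ]; then cat "$f"; else slonik "$f"; fi
```

For a planned switchover (both nodes healthy), the Slony-1 literature recommends LOCK SET plus MOVE SET instead, since FAILOVER discards the old origin.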

3. How to add tables to a replication set while the slon daemon is running?

  • Slony-1 will not allow users to directly add a table to an existing replication set while it is running. Instead, one must create a new replication set, add the table(s) in question, and then merge that new set with the existing one. Additionally, Slony-1 will not automatically create any tables or propagate schema changes. Those must be handled outside of Slony-1, or it will not be able to process data pertaining to the tables/changes in question. Scripts are being written at the JSOC to handle these changes for Slony-1 automatically. These scripts would be run after a user has decided that a data set needs to be included in a replication set, not just as a new series in JSOC.
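
The create-then-merge procedure above can be sketched in slonik, wrapped in a shell script. Set ids, node conninfo strings, and the table name ns.new_series are hypothetical; CREATE SET, SET ADD TABLE, SUBSCRIBE SET, and MERGE SET are the documented slonik commands. Note that the merge can only happen once the new subscription is fully active.

```shell
#!/bin/sh
# Sketch: add a table to a running replication setup by creating a
# temporary set 2, subscribing it, then merging it into set 1.
# All ids, hosts, and names below are placeholders.
DRYRUN=${DRYRUN:-1}
f=$(mktemp)

cat > "$f" <<'EOF'
cluster name = jsoc;
node 1 admin conninfo = 'dbname=jsoc host=master user=slony';
node 2 admin conninfo = 'dbname=jsoc host=slave user=slony';
create set (id = 2, origin = 1, comment = 'tables added after initial setup');
set add table (set id = 2, origin = 1, id = 10, fully qualified name = 'ns.new_series');
subscribe set (id = 2, provider = 1, receiver = 2, forward = no);
merge set (id = 1, add id = 2, origin = 1);
EOF

if [ "$DRYRUN" = 1 ]; then cat "$f"; else slonik "$f"; fi
```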

4. Replica DB maintenance? e.g., rebuild index, vacuum, etc.

  • Replica DBs will require regular maintenance. The changes being made to replica databases are done via SQL commands, just as they are on the original master host. If we clean up the master host, there is no way for the replica to do the same (those clean-up commands are outside the purview of Slony-1). Vacuuming is done by Slony-1 at every log ingest, at least, but for major maintenance it is recommended that all replicas consider their own maintenance needs and plan accordingly. Maintenance cannot be propagated remotely.
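
A replica-side maintenance pass might look like the following sketch, using the standard Postgres client tools vacuumdb and reindexdb. The database name is a placeholder, and with DRYRUN=1 (the default here) the commands are only printed.

```shell
#!/bin/sh
# Sketch: routine maintenance a replica must run for itself, since
# VACUUM/REINDEX on the master is never carried in Slony-1 logs.
DBNAME=${DBNAME:-drms_replica}
DRYRUN=${DRYRUN:-1}

maintain() {
    for cmd in \
        "vacuumdb --analyze --dbname $DBNAME" \
        "reindexdb --dbname $DBNAME"
    do
        # Print in dry-run mode; otherwise run, stopping on first failure.
        if [ "$DRYRUN" = 1 ]; then echo "$cmd"; else $cmd || return 1; fi
    done
}

maintain
```

Scheduling this from cron during a quiet period would be the natural next step; how often depends on each replica's ingest volume.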

5. How to copy table schema and role information

  • Table schema information will be handled as above, in point #3. Role information, user information, and namespace information will require additional small scripts similar to those written for step #3, above.
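
As a sketch of what such scripts might do, the standard Postgres tools can already capture the two halves: pg_dumpall --globals-only for roles and other global objects, and pg_dump --schema-only for table and namespace definitions. Host and database names below are placeholders, and with DRYRUN=1 (the default here) the commands are only printed.

```shell
#!/bin/sh
# Sketch: capture role and schema information from the master so it can
# be replayed on a replica (Slony-1 itself never ships these).
MASTER=${MASTER:-masterhost}
DBNAME=${DBNAME:-jsoc}
DRYRUN=${DRYRUN:-1}

run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# Roles and other global objects (no table data):
run pg_dumpall -h "$MASTER" --globals-only -f globals.sql
# Table and namespace definitions only, no rows:
run pg_dump -h "$MASTER" --schema-only -f schema.sql "$DBNAME"
```

The two .sql files could then be replayed on the replica with psql before subscribing.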

6. Is there a way to subscribe to part of a series?

  • No. You must subscribe to the entire series. The quantum of Slony-1 replication is the database table, and a series is one database table. Theoretically, a replica could create its own script to parse the Slony-1 log before it is ingested and remove "irrelevant" entries. This is not advised, as it could cause referential integrity issues. Instead, it may be better to delete "irrelevant" data from the series more directly, via scripts run on Postgres. Either way, removal of data is up to the replica site and will not be supported by the JSOC.
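
A direct-deletion script of the kind suggested above might look like this sketch. The series table, the column name t_obs, and the cutoff date are all hypothetical; with DRYRUN=1 (the default here) the psql command is only printed.

```shell
#!/bin/sh
# Sketch: drop "irrelevant" rows on the replica after ingest, instead of
# editing Slony-1 logs before ingest (which risks referential-integrity
# problems). Table, column, and cutoff are placeholders.
DBNAME=${DBNAME:-drms_replica}
SERIES=${SERIES:-ns.series_name}
DRYRUN=${DRYRUN:-1}

trim_series() {
    sql="DELETE FROM $SERIES WHERE t_obs < '2008-01-01';"
    if [ "$DRYRUN" = 1 ]; then
        echo "psql -d $DBNAME -c \"$sql\""
    else
        psql -d "$DBNAME" -c "$sql"
    fi
}

trim_series
```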

Procedure to Become a Replica

1. Decide whether you wish to simply receive logs, or whether you wish to receive logs and send logs back to the JSOC or another replica. The former is easier for you, but could be limiting. If you want to decide later, and just start with receiving logs, that's fine. You will not lock yourself out of log-shipping later.

2. You will need to understand basic SQL, basic shell scripting, and you will need to have Postgres v8.3 or higher installed. It is recommended that you have some basic understanding of how Postgres works, how to log into it, and how to check its health.

3. Figure out what data sets you would like to receive and whether you have the space to do so. Make a list of those data sets, by namespace.series_name. Soon, the JSOC team will be able to respond to your request and make a slon daemon for your job. If you are already a replica site receiving Slony-1 logs from the JSOC, then you need to check with the person who is managing that process first.

4. More To Be Determined as we at JSOC progress with implementing Slony-1 log shipping.
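
As a small sanity check for step 2 above, a shell snippet can verify that the local Postgres is v8.3 or higher. Only the output format of psql --version (version number as the last word of its one-line output) is assumed.

```shell
#!/bin/sh
# Sketch: confirm the local Postgres meets the v8.3+ requirement.
need_major=8
need_minor=3

version_ok() {
    # $1 is a version string such as "8.3.1"
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    [ "$major" -gt "$need_major" ] ||
        { [ "$major" -eq "$need_major" ] && [ "$minor" -ge "$need_minor" ]; }
}

if command -v psql >/dev/null 2>&1; then
    v=$(psql --version | awk 'NR==1 {print $NF}')
    if version_ok "$v"; then
        echo "Postgres $v: new enough"
    else
        echo "Postgres $v: older than 8.3, please upgrade"
    fi
else
    echo "psql not found in PATH"
fi
```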

JsocWiki: SlonyI (last edited 2013-05-01 04:35:24 by localhost)