This table lists project name allocations for use in the JSOC DRMS.
Series names are logically the concatenation of a project name and a data descriptor name. The "." character separates the project name from the data name. At present both name parts must be present, although we may in the future allow the project name to default to the user's current namespace. An example might be "hmi.fd_V" for full disk velocity from HMI. A different dataseries would be "su_phil.fd_V", which could be used to contain similar data during development or while testing a variation on the computation.
The project names are implemented as PostgreSQL database "namespace" names. The namespace name serves to provide an isolated set of database tables for each user or project. The database system provides protection based on each user's current namespace, or on the user's membership in a group which has access to a given namespace. Thus we can have a group for HMI production with, say, five members. Each of those members can create or add to series in the HMI namespace. They can also create or add series in their own namespace or in other namespaces whose groups they belong to (such as MDI). A user cannot make changes to, create series in, or add records to series in namespaces where they are not in an associated group or are not the owner. This gives a clean isolation of user work areas from each other and from the primary archives.
The term project name will be used synonymously with namespace name. From the user's perspective project names serve the same role as the program name in the MDI DSDS system (i.e. the part after "prog:"). The syntax for project names uses the same rules as a data descriptor name: letters, digits, and underscores. The first character must be a letter. Names are case sensitive but must be unique independent of case. It will be convenient to keep project names short.
Each user has a default project which is derived from the login name to the database. The database login will usually be the same as the user's login name to the computer. The login name will be prefixed by the user's institution or site name, so the namespace naming convention is: siteID_userID. We will use 'su' for Stanford, 'lm' for LMSAL, etc. Thus su_phil will be the project name for user phil at Stanford. Major projects will omit the site prefix. E.g. hmi, mdi, aia will be the main archive namespace names for those projects.
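The siteID_userID convention above can be sketched in a few lines. This is an illustrative helper, not actual DRMS code; the function name is invented for this example.

```python
def default_namespace(site_id: str, user_id: str) -> str:
    """Derive a user's default project (namespace) name as siteID_userID."""
    return f"{site_id}_{user_id}"

# default_namespace("su", "phil") -> "su_phil"  (user phil at Stanford)
```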
Data names consist of simple ASCII letters, digits, and the underscore ("_"). The first character must be a letter. Names are case-sensitive. The project name and data name must be unique independent of case.
Thus a dataseries name is:
<dataseries name> ::= <project name>.<data name>
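The naming rules above (both parts present, each starting with a letter and containing only letters, digits, and underscores) can be captured in a small validator. This is a minimal sketch under those stated rules, not actual DRMS code.

```python
import re

# A valid name part: first character a letter, then letters, digits, underscores.
_NAME_PART = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def parse_dataseries(name: str) -> tuple[str, str]:
    """Split a dataseries name into (project, data) and validate both parts."""
    project, sep, data = name.partition(".")
    if not sep:
        raise ValueError("both project name and data name are required")
    for part in (project, data):
        if not _NAME_PART.match(part):
            raise ValueError(f"invalid name part: {part!r}")
    return project, data

# parse_dataseries("hmi.fd_V")     -> ("hmi", "fd_V")
# parse_dataseries("su_phil.fd_V") -> ("su_phil", "fd_V")
```

Note that the case-insensitive uniqueness requirement is a registry-level constraint and cannot be checked by parsing a single name in isolation.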
The following table is the present set of allowed project names. The data name parts following the project name are under the control of the project name manager identified in the table. The SUMS storage group identifier ranges are loosely associated with the project names; they are assigned by the same namespace manager as the data names within each project and are listed elsewhere.
Reserved Namespaces

Namespace Prefix | Owning Site | Owning DB Role | Description
aia | JSOC | sdodata | AIA instrument flight data, production archive
aia_ground | JSOC | sdodata | AIA instrument ground testing (pre-launch)
cfa_* | Harvard-Smithsonian Center for Astrophysics | | Prefix for all CFA user namespaces
cora_* | Colorado Research Associates, NWRA | | Prefix for all CORA user namespaces
cu_* | University of Colorado | | Prefix for all University of Colorado user namespaces
ds_mdi | JSOC | production | TBD copies of DSDS datasets. The prog:level:series name parts would follow with "_" delimiters.
dsds | JSOC | dsdsdata | DRMS versions of all DSDS data
hmi | JSOC | sdodata | HMI instrument flight data, production archive
hmi_ground | JSOC | sdodata | HMI instrument ground testing (pre-launch)
jpl_* | Jet Propulsion Labs | | Prefix for all JPL user namespaces
jsoc | JSOC | production | Administrative data, DRMS log files in DRMS_SESSION tables
kis_* | Kiepenheuer Institut für Sonnenphysik, Freiburg, Germany | | Prefix for all KIS user namespaces
lm_* | LMSAL | | Prefix for all Lockheed Martin user namespaces
mdi | JSOC | dsdsdata | DRMS versions of MDI data
mps_* | Max-Planck-Institut für Sonnensystemforschung | | Prefix for all Max Planck user namespaces
nso_* | National Solar Observatory | | Prefix for all NSO user namespaces
oca_* | Observatoire de la Côte d'Azur | | Prefix for all OCA user namespaces
rob_* | Royal Observatory of Belgium | | Prefix for all ROB user namespaces
sdo | JSOC | sdodata | SDO instrument flight data, e.g. housekeeping and attitude data, production archive
sdo_dev | JSOC | sdodata | SDO instrument code development
sdo_ground | JSOC | sdodata | SDO instrument ground testing (pre-launch)
sha | Stanford University | rick | Stanford Helioseismology Archive
sid_awe | Stanford University | epodata | SID & AWESOME projects
su_* | JSOC, Stanford University | | Prefix for all Stanford University user namespaces, e.g. su_phil; the user login name follows the site prefix. All DRMS users have implied project names using this convention.
vso | VSO | | Virtual Solar Observatory and related data servers
wso | John Beck | | WSO data
yale_* | Yale University | | Prefix for all Yale University user namespaces
This table lists Tape Group allocations for use in the JSOC DRMS.
Tape groups are specified in the main series table by the Tapegroup variable in the JSD file. The tapegroup information is used only at the time the SUMS Storage Units (SU) are written to tape. The purpose of the tapegroup is to cluster similar data onto fewer tapes. This serves several purposes:
- Low-level data such as raw telemetry will migrate to the shelf and stay there. If more frequently used data is intermixed on a tape with seldom-used data, there will be more churning of archive tapes.
- Many of the higher-level helioseismology data products grow with time but are needed all at once for a long span of observed time. A 5-year time series of a small product (e.g. l=2 spherical harmonics), say, will be stored in thousands of SUs. If those are intermixed with level-0 data, for example, retrieving a few GB of data would require mounting thousands of tapes.
These two examples are sufficient reason to use tape groups. On the other hand, there is some benefit to having large data products that will not be kept online distributed across more than one tape per day. This reduces read time by allowing parallel reads on multiple drives when a span of a half day to a few days is retrieved. Current estimates are that it may take two or three hours to retrieve all the data on a given tape, so even having two or three drives sharing the load would help balance computation and retrieval times. For this reason it might be reasonable to mix HMI and AIA level-0 data in the same tape group. Since we expect tape access to HMI data to be rare, this is mostly an issue for AIA data, where only selected times/regions will be maintained online.
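The clustering idea above amounts to batching pending Storage Units by their tapegroup before they are written to tape, so that similar data lands on the same tapes. The following is an illustrative sketch, not actual SUMS code; the function name and data shape are invented for this example.

```python
from collections import defaultdict

def plan_tape_writes(storage_units):
    """Cluster pending Storage Units by tapegroup.

    storage_units: iterable of (su_id, tapegroup) pairs.
    Returns {tapegroup: [su_id, ...]}, one write batch per tapegroup.
    """
    batches = defaultdict(list)
    for su_id, tapegroup in storage_units:
        batches[tapegroup].append(su_id)
    return dict(batches)

# Writing each batch to its own tape set keeps e.g. seldom-read telemetry
# apart from time-series products that are later read back in long spans,
# so a retrieval touches far fewer tapes.
```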
In addition to Tapegroup control there is a list of how long each tapegroup's tapes should be retained near-line. This list is maintained as part of SUMS administration. Also, individual tapegroups are assigned one of a few different SUM_sets. These SUM_sets correspond to distinct collections of SUMS disk partitions on which the corresponding online data are to be stored.
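The administrative list described above can be thought of as a mapping from tapegroup to its near-line retention and SUM_set. The sketch below is hypothetical: the retention days for groups 1, 2, and 4 come from the table that follows, but the SUM_set values and the default of 30 days are assumptions for illustration only.

```python
# Hypothetical excerpt of the SUMS administration list:
# tapegroup -> (near-line retention in days, SUM_set).
# SUM_set values below are placeholders, not real assignments.
RETENTION = {
    1: (365, 0),  # catch-all group
    2: (60, 0),   # hmi_ground, hmi.lev0*
    4: (1, 0),    # hmi_ground, hmi.tlm*
}

def near_line_days(tapegroup: int) -> int:
    """Look up retention for a tapegroup; 30 days is an assumed default."""
    days, _sum_set = RETENTION.get(tapegroup, (30, 0))
    return days
```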
The following table lists the current tapegroup numbers and desired near-line retention times. See the SUMS Archive Status Table for group number assignments for tape archiving and for SUM_set assignments. NOTE: Near-Line Days are due to be revised.
Tapegroup | Near-Line Days | Use
0 | N.A. | Not assigned (NOTE: people have used this)
1 | 365 | Catch-all group, use for everything not listed below
1 | | sid_awe - SID and AWESOME monitor data
2 | 60 | hmi_ground, hmi.lev0*
3 | 60 | aia_ground, aia.lev0*
4 | 1 | hmi_ground, hmi.tlm*
5 | 1 | aia_ground, aia.tlm*
6 | 30 | hmi above lev1
7 | 30 | aia above lev1
8 | 7 | sha - Stanford Helioseismology Archive, large imported data series
9 | 30 | dsds - migrated datasets from DSDS
10 | 30 | hmi lev1
11 | 30 | sid_awe.awesome
12 | 30 | sid_timh.awesome
30 | 30 | primary MDI data products: mdi.{fd_V, fd_M, vw_V, etc.}
31 | 30 | additional MDI observables
32 | 30 | low-level MDI data and calibration data
33 | 30 | high-level MDI data products
34 | 30 | Reserved for ingesting MDI data into DRMS
35 | 30 | Reserved for ingesting MDI data into DRMS
36 | 30 | Reserved for ingesting MDI data into DRMS
37 | 30 | Reserved for ingesting MDI data into DRMS
38 | 30 | Reserved for ingesting MDI data into DRMS
39 | 30 | Reserved for ingesting MDI data into DRMS
100 | N/A | aia.lev1
102 | 60 | hmi.lev0_60d
103 | 60 | aia.lev0_60d
104 | 1 | hmi.tlm_60d
105 | 1 | aia.tlm_60d
200 | 1 | dsds_bak (mdi lev0 extra backup to tape)
310 | 1 | hmi.rdVtrack_fd05, hmi.rdVtrack_fd15, hmi.rdVtrack_fd30
311 | 1 | hmi.rdVpspec_fd05, hmi.rdVpspec_fd15, hmi.rdVpspec_fd30
320 | 1 | hmi.tdVtrack_synopHC, hmi.tdVtimes_synopHC
10000 | N/A | aia.lev1 (obsolete; now group 100)
NOTE: The group 10 hmi.lev1 starts on its own tapes (without data products above hmi.lev1) on Apr 4, 2011.
SUM_ARCH_GROUP
This table is used by tape_do_archive.pl to keep track of when to start another archive to tape for a given group number; it is also used by create_series to verify group number assignments.
Distribution of group data in SUM storage:
http://sun.stanford.edu/~jsoc/SUM/sumlookgroup (last report)
http://sun.stanford.edu/~jsoc/SUM/sumlookgroup.prev (previous report)
Other reports (ignore, these are outdated):
Storage Unit "delete-pending" disposition, sorted by group then series name
Storage Unit "archive-pending" disposition, sorted by group then series name