This table lists project name allocations for use in the JSOC DRMS.
Series names are logically the concatenation of a project name and a data descriptor name, with the "." character separating the project name from the data name. At present both name parts must be present, although we may in the future allow the project name to default to the user's current namespace. An example might be "hmi.fd_V" for full-disk velocity from HMI. A different dataseries would be "su_phil.fd_V", which could be used to hold similar data during development or while testing some variation on the computation.
The project names are implemented as PostgreSQL database "namespace" names. A namespace provides an isolated set of database tables for each user or project. The database system enforces protection based on each user's current namespace, or on the user's membership in a group that has access to a given namespace. Thus we can have a group for HMI production with, say, five members. Each of those members can create or add to series in the HMI namespace. They can also create or add to series in their own namespace or in other namespaces whose groups they belong to (such as MDI). A user cannot make changes to, create series in, or add records to series in namespaces where they are neither in an associated group nor the owner. This gives a clean isolation of user work areas from each other and from the primary archives.
The term project name will be used synonymously with namespace name. From the user's perspective project names serve the same role as the program name in the MDI DSDS system (i.e. the part after "prog:"). The syntax for project names uses the same rules as a data descriptor name: letters, digits, and underscores. The first character must be a letter. Names are case sensitive but must be unique independent of case. It will be convenient to keep project names short.
Each user has a default project which is derived from the login name to the database. The database login will usually be the same as the user's login name to the computer. The login name will be prefixed by the user's institution or site name. We will use 'su' for Stanford, 'lm' for LMSAL, etc. Thus su_phil will be the project name for user phil at Stanford. Major projects will omit the site prefix. E.g. hmi, mdi, aia will be the main archive namespace names for those projects.
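As an illustrative sketch (not JSOC code), the default project name could be derived from the site prefix and login name like this; the prefix table below contains only the two site codes mentioned above:

```python
# Illustrative only: derive a user's default project (namespace) name
# from a site prefix and database login name, per the convention above.
# SITE_PREFIXES holds just the example site codes given in the text.
SITE_PREFIXES = {"stanford": "su", "lmsal": "lm"}

def default_project(site: str, login: str) -> str:
    """Return the default project name, e.g. 'su_phil' for user phil at Stanford."""
    return f"{SITE_PREFIXES[site]}_{login}"
```

Major projects (hmi, mdi, aia) simply omit the site prefix and are not covered by this helper.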
Data names consist of simple ASCII letters, digits, and the underscore ("_"). The first character must be a letter. Names are case-sensitive, but the project name and data name must each be unique independent of case.
Thus a dataseries name is:
<dataseries name> ::= <project name>.<data name>
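The naming rules above can be captured in a small sketch (a hypothetical helper, not part of DRMS):

```python
import re

# A name part is letters, digits, and underscores; first character a letter.
_NAME_PART = re.compile(r"[A-Za-z][A-Za-z0-9_]*$")

def parse_dataseries(name: str) -> tuple[str, str]:
    """Split '<project name>.<data name>' and validate both parts.

    Both parts are currently required; defaulting the project name to the
    user's namespace is a possible future extension, not handled here.
    """
    project, sep, data = name.partition(".")
    if not sep or not data:
        raise ValueError("both project name and data name are required")
    for part in (project, data):
        if not _NAME_PART.match(part):
            raise ValueError(f"invalid name part: {part!r}")
    return project, data
```

For example, `parse_dataseries("hmi.fd_V")` returns `("hmi", "fd_V")`, and `parse_dataseries("fd_V")` raises because the project part is missing.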
This table lists the present set of allowed project names. The data name parts subsequent to the project name are under the control of the project name manager identified in this table. The SUMS storage group identifier ranges are loosely associated with the project names; they are assigned by the same namespace manager as the data names within each project and are shown in another place...
| Project | Prefix | Owner | Purpose |
|---|---|---|---|
| JSOC | jsoc | production | DRMS log files in DRMS_SESSION tables |
| HMI | hmi | sdodata | Production archive of HMI data |
| | hmi_ground | sdodata | HMI ground test data, pre-launch |
| AIA | aia | sdodata | Production archive of AIA data |
| | aia_ground | sdodata | AIA ground test data, pre-launch |
| SDO | sdo | sdodata | SDO data, e.g. HK & FDS data |
| MDI | mdi | production | DRMS versions of MDI data |
| | ds_mdi | production | TBD copies of DSDS datasets. The prog:level:series name parts would follow with "_" delimiters. |
| VSO | | production | Virtual Solar Observatory and related data servers |
| Stanford projects | sid_awe | Debbie | SID project and AWESOME project |
| DSDS | dsds | production | DRMS versions of all DSDS data |
| | sha | | Stanford Helioseismology Archive |
| | wso | John Beck | WSO data |
| Stanford users | su_ | | Individual project names. A user identifier follows the site id. All DRMS users have implied project names using this convention. |
| | su_jim | Jim Aloise | |
| | su_phil | Phil Scherrer | |
| | su_rsb | Rick Bogart | |
| | su_... | etc. | |
| LMSAL users | lm_ | TBD | Individual project names. The user's login name follows the site name. |
| NSO users | nso_ | login owner | |
| Univ Colorado users | cu_ | | Individual projects: siteID_userID |
| | cu_dah | Deborah Haber | |
This table lists Tape Group allocations for use in the JSOC DRMS.
Tape groups are specified in the main series table by the Tapegroup variable in the JSD file. The tapegroup information is used only at the time the SUMS Storage Units (SU) are written to tape. The purpose of the tapegroup is to cluster similar data onto fewer tapes. This serves several purposes:
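For illustration only, a series-definition (JSD) fragment setting the tape group might look like the following; the series name and values here are invented examples, and the exact keyword set should be checked against the JSD documentation:

```
Seriesname:  hmi.lev0
Archive:     1
Tapegroup:   2
```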
- Low-level data such as raw telemetry will migrate to the shelf and stay there. If more frequently used data is intermixed on a tape with seldom-used data, there will be more churning of archive tapes.
- Many of the higher-level helioseismology data products grow with time but are needed all at once for a long span of observed time. A 5-year time series of a small product (e.g. l=2 spherical harmonics) will be stored in thousands of SUs; if these are intermixed with, for example, level-0 data, retrieving a few GB of data would require mounting thousands of tapes.
These two examples are sufficient reason to use tape groups. On the other hand, there is some benefit to having large data products that will not be kept online distributed across more than one tape per day. The benefit is reduced read time: parallel reads on multiple drives become possible when a span of half a day to a few days is retrieved. Current estimates are that it may take two or three hours to retrieve all the data on a given tape; even two or three drives sharing the load would help balance computation and retrieval times. For this reason it might be reasonable to mix HMI and AIA level-0 data in the same tape group. Since we expect tape access to HMI data to be rare, this is mostly an issue for AIA data, where only selected times/regions will be maintained online.
In addition to Tapegroup control there is a list of how long each tapegroup's tapes should be retained near-line. This list is maintained as part of SUMS administration.
The following table is a first cut at tapegroup numbers and desired near-line retention times.
| Tapegroup | Near-Line Days | Use |
|---|---|---|
| 0 | N.A. | Use 0 for non-archived dataseries |
| 1 | 365 | Catch-all group; use for everything not listed below |
| 2 | 60 | hmi_ground, hmi.lev0* |
| 3 | 60 | aia_ground, aia.lev0* |
| 4 | 1 | hmi_ground, hmi.tlm* |
| 5 | 1 | aia_ground, aia.tlm* |
| 6 | 30 | hmi_telem, on-site copy of telemetry data |
| 7 | 30 | aia_telem, on-site copy of telemetry data |
| 8 | 60 | hmi.lev1*, backup of observables to be kept online |
| 9 | 365 | aia.lev1*, prime AIA archive; should be near-line as long as possible |
| 10 | 7 | sha - Stanford Helioseismology Archive, large imported data series |
| 11 | | sid_awe - SID and AWESOME monitor data |
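The draft retention policy above can be sketched as a simple lookup (illustrative Python, not SUMS code; in practice this list is maintained as part of SUMS administration, and only the first several groups are shown here):

```python
# Near-line retention in days per tapegroup, mirroring the draft table above.
# Tapegroup 0 is reserved for non-archived dataseries and has no retention.
NEAR_LINE_DAYS = {
    1: 365,  # catch-all group
    2: 60,   # hmi_ground, hmi.lev0*
    3: 60,   # aia_ground, aia.lev0*
    4: 1,    # hmi_ground, hmi.tlm*
    5: 1,    # aia_ground, aia.tlm*
    6: 30,   # hmi_telem
    7: 30,   # aia_telem
    8: 60,   # hmi.lev1*
    9: 365,  # aia.lev1*
}

def retention_days(tapegroup: int) -> int:
    """Return near-line retention; unlisted groups fall back to the catch-all."""
    if tapegroup == 0:
        raise ValueError("tapegroup 0 is for non-archived dataseries")
    return NEAR_LINE_DAYS.get(tapegroup, NEAR_LINE_DAYS[1])
```

Falling back to the 365-day catch-all for unlisted groups matches the intent of tapegroup 1 above, but the real policy would of course be whatever the SUMS administration tables say.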