Implemented and running fine.
Currently all weather checks are carried out inside the RTS2 dome component. The nice thing about that is that there is only a single instrument that can fail and prevent the dome from closing. This single component can also be easily tested and is known to run well (it has been operating on FRAM for ages). But it has the following drawbacks:
- it is hard to share weather information between multiple RTS2 setups; we have encountered this at BOOTES 1.
- it is hard to see which device is blocking the observatory
- it is non-trivial (it requires editing the C source) to attach a new weather sensor to the observatory.
These drawbacks should be resolved by the following code changes:
In short: individual devices provide their opinion about whether to open or not over the normal RTS2 channels. Some may be set up as mandatory in the system configuration. Multiple-instance connections, originally rather theoretical, should be tested and enabled.
Multiple central servers
A single device needs to be able to connect easily to multiple running instances of RTS2 (e.g. a cloud sensor needs to connect to two RTS2 instances to provide information to both).
To do this, the following must be assured:
- a single device having two Rts2CentralConn instances (no major changes expected)
- proper device IDs obtained from multiple centralds (this shall be in CentralConn; it does not look like a big issue)
- proper logging (to both centralds); some changes expected
- what to do in case of a centrald failure (a hook in the device class, to be filled in as needed)
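The points above can be sketched as a device that keeps one connection object per centrald. This is a minimal illustration under assumed names (`CentralConn`, `Device`, and their members are placeholders, not the actual RTS2 classes): each connection stores its own device ID, log messages fan out to every live connection, and a failure hook marks a connection dead.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical connection to one centrald; not the real RTS2 class.
struct CentralConn {
    explicit CentralConn(const std::string &h)
        : host(h), deviceId(-1), alive(true) {}
    std::string host;
    int deviceId;   // ID assigned by this particular centrald
    bool alive;
};

// Hypothetical device holding one CentralConn per running centrald.
class Device {
public:
    void addCentrald(const std::string &host) {
        conns.emplace_back(host);
    }
    // Each centrald assigns its own device ID; store it per connection.
    void setDeviceId(size_t idx, int id) { conns.at(idx).deviceId = id; }
    // Log messages go to every centrald that is still alive.
    void logToAll(const std::string &msg) {
        for (const auto &c : conns)
            if (c.alive)
                std::cout << c.host << ": " << msg << "\n";
    }
    // Failure hook: mark the connection dead; subclasses can extend this.
    void centraldFailed(size_t idx) { conns.at(idx).alive = false; }
    size_t aliveCount() const {
        size_t n = 0;
        for (const auto &c : conns)
            if (c.alive)
                ++n;
        return n;
    }
private:
    std::vector<CentralConn> conns;
};
```

With two centralds connected and one failed, the device keeps operating against the surviving one, which is the behavior the list above asks for.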
Centrald will hold a list of devices which are required for system operation. This list holds the names of devices which must be present to switch the system to standby or on. If any of these devices is missing, centrald will reject all requests to switch state and will generate a warning email.
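The required-device check reduces to a set difference. The following is an illustrative sketch (function and parameter names are assumptions, not centrald's actual code): while the returned list is non-empty, switch requests are rejected and a warning can be generated.

```cpp
#include <set>
#include <string>
#include <vector>

// Returns the required devices that are not currently connected.
// While this list is non-empty, state-switch requests are rejected.
std::vector<std::string> missingDevices(
        const std::set<std::string> &required,
        const std::set<std::string> &connected)
{
    std::vector<std::string> missing;
    for (const auto &name : required)
        if (connected.count(name) == 0)
            missing.push_back(name);  // candidate for the warning email
    return missing;
}
```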
Before the system switches to a higher state, it will ask all devices for an update. A device's state can indicate that the device blocks switching to standby or on. If such a bit is set on at least one device, centrald will reject the mode switch and record that it was rejected. If all devices agree, centrald will switch the state and distribute a message about the new state to all connected clients.
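The confirmation round above amounts to scanning the freshly queried device states for a blocking bit. A sketch, assuming illustrative bit and struct names (the real RTS2 state bit definitions may differ):

```cpp
#include <string>
#include <vector>

// Assumed blocking bits; the real RTS2 state bit layout may differ.
const unsigned int BLOCK_STANDBY = 0x01;  // device vetoes switch to standby
const unsigned int BLOCK_ON      = 0x02;  // device vetoes switch to on

struct DeviceState {
    std::string name;
    unsigned int state;  // state bits freshly queried from the device
};

// Returns true when no device blocks the requested switch. On a veto,
// 'blocking' receives the name of the first offending device, so the
// rejection can be recorded.
bool canSwitch(const std::vector<DeviceState> &devs,
               unsigned int blockBit, std::string &blocking)
{
    for (const auto &d : devs) {
        if (d.state & blockBit) {
            blocking = d.name;
            return false;
        }
    }
    return true;
}
```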
If, while the device states are being queried, any connection sends a command to change state, the request will be canceled and a new request (if needed) will be issued.
Centrald asks for state-switching confirmation only if the requested state is higher than the actual state (e.g. it asks when switching from off to standby or on, and from standby to on). It does not ask for any confirmation if the request is for a lower or the same state.
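Assuming the states are ordered off &lt; standby &lt; on (an assumption made here for illustration), the rule is a single comparison:

```cpp
// Assumed ordering: off < standby < on.
enum SystemState { STATE_OFF = 0, STATE_STANDBY = 1, STATE_ON = 2 };

// Device confirmation is required only when the target state is strictly
// higher than the current one; lowering or keeping the state never asks.
bool needsConfirmation(SystemState current, SystemState target)
{
    return target > current;
}
```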
When centrald starts, it has two options:
- if the uptime of the whole system, as determined by the sysinfo call, is below the value (in seconds) specified in observatory / wait_after_reboot, it will switch immediately to off if reboot_on is false and to on if reboot_on is true
- otherwise, the system state will be set by the reboot_on config entry (on if reboot_on is true, off if it is false). The system will then enter a grace period and wait for other devices to connect. After this period expires, it will check the list of connected devices. If any required device is missing, it will switch to off and write a message to the log file.
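The startup decision above can be sketched as follows. The uptime really is available from the Linux sysinfo(2) call; the function and struct names are illustrative, and the config keys (wait_after_reboot, reboot_on) mirror the text:

```cpp
#include <sys/sysinfo.h>  // Linux-only: sysinfo() reports uptime in seconds

enum SystemState { STATE_OFF = 0, STATE_STANDBY = 1, STATE_ON = 2 };

struct StartupDecision {
    SystemState state;
    bool gracePeriod;  // wait for required devices before the final check
};

// Illustrative sketch of the two startup options described above.
StartupDecision decideStartup(long uptimeSec, long waitAfterReboot,
                              bool rebootOn)
{
    SystemState s = rebootOn ? STATE_ON : STATE_OFF;
    if (uptimeSec < waitAfterReboot)
        return StartupDecision{s, false};  // fresh reboot: switch immediately
    return StartupDecision{s, true};       // plain restart: grace period first
}

// Whole-system uptime via the sysinfo call mentioned in the text.
long systemUptime()
{
    struct sysinfo si;
    if (sysinfo(&si) != 0)
        return -1;
    return si.uptime;
}
```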
This procedure is implemented so that the dome roof is not closed on an RTS2 restart.
The following solution was implemented and runs successfully on all systems:
- weather state: each device has a weather state, which can be either good or bad. This way any device may signal whether the weather is acceptable for observation.
- a required-device list in the central server, so the central server knows which devices must be present in order to turn the central weather state to good. If some of these devices are missing, the central weather state is set to bad.
- a list of failed devices in the central server, so the user can see from the monitor why the central server is in a bad weather state.
- mootd, which makes it possible to connect two or more central servers and signals off states to all of them. If at least one central server is in hard off, mootd signals bad weather on all connected central servers.
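mootd's aggregation rule reduces to a single scan over the connected central servers. A sketch under assumed type names (not mootd's actual code):

```cpp
#include <vector>

// Assumed per-centrald status as seen by mootd.
struct CentraldStatus {
    bool hardOff;  // this central server reports a hard off state
};

// mootd signals bad weather to every connected central server as soon as
// at least one of them is in hard off.
bool signalBadWeather(const std::vector<CentraldStatus> &servers)
{
    for (const auto &s : servers)
        if (s.hardOff)
            return true;
    return false;
}
```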