sda.api.load_data_ovh#

The file was erased in PR#33, so some files may be missing.

See spark-cleantech/sda#files for more details.

In particular, the README.md contained the following note:

## Postgres credentials & connection

### Steps to configure the connection

  1. Automatic creation of the `sda.json` file:

    • The first time your application runs after importing get_config from sda.misc.config, an sda.json file is automatically created in your home directory.

    • This file will contain the following keys with empty values:

    ```json
    {
      "DB_USER": "",
      "DB_PASSWORD": "",
      "DB_HOST": "",
      "DB_PORT": "",
      "DB_NAME": ""
    }
    ```

  2. Connect to the VPN:

    • The VPN used is Surfshark.

    • You must connect to the dedicated IP address 185.200.206.34 in order to access the data on the cloud.

  3. Fill in the connection information:

    • Open the sda.json file in a text editor.

    • Fill in the values with your connection information for the database on the cloud (see the sketch after this list for loading and validating the file):

    ```json
    {
      "DB_USER": "user_name",
      "DB_PASSWORD": "password",
      "DB_HOST": "host_name",
      "DB_PORT": "port",
      "DB_NAME": "database_name"
    }
    ```
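
For illustration, here is a minimal Python sketch of loading and validating such a file. It assumes the configuration lives at `~/sda.json` with exactly the keys above, and it stands in for, rather than reproduces, the project's own get_config helper from sda.misc.config.

```python
import json
from pathlib import Path

REQUIRED_KEYS = ("DB_USER", "DB_PASSWORD", "DB_HOST", "DB_PORT", "DB_NAME")


def read_sda_config(path: Path = Path.home() / "sda.json") -> dict:
    """Load ~/sda.json and verify that no credential was left empty.

    Illustrative only: the real project helper is get_config from
    sda.misc.config, whose exact behaviour is not shown here.
    """
    with open(path, encoding="utf-8") as fh:
        config = json.load(fh)

    # Report every key that is missing or still an empty string.
    missing = [key for key in REQUIRED_KEYS if not config.get(key)]
    if missing:
        raise ValueError(f"Incomplete database configuration, empty keys: {missing}")
    return config


if __name__ == "__main__":
    cfg = read_sda_config()
    print(f"Will connect to {cfg['DB_HOST']}:{cfg['DB_PORT']}/{cfg['DB_NAME']}")
```

Checking for empty values up front mirrors the ValueError documented for load_data below, where an incomplete configuration is rejected before any connection is attempted.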

Functions#

load_data(instrument_code[, start_time, end_time])

Load data from a specific table in the database based on the instrument code and time range.

Module Contents#

sda.api.load_data_ovh.load_data(instrument_code, start_time=None, end_time=None)#

Load data from a specific table in the database based on the instrument code and time range.

Parameters:
  • instrument_code (str) – The code of the instrument used to determine the table and columns to query.

  • start_time (str, optional) – The start timestamp for filtering the data. Defaults to None.

  • end_time (str, optional) – The end timestamp for filtering the data. Defaults to None.

Returns:

A DataFrame containing the selected data from the specified time range and instrument.

Return type:

pandas.DataFrame

Raises:
  • ValueError

    • If the instrument code is not found in the table mapping.

    • If the database configuration is incomplete or contains invalid values.

  • ConnectionError

    • If the connection to the database fails.

    • If the SQL query fails to execute due to connection issues.

Notes

  • The function reads database configurations from ~/sda.json.

    It is essential that this file contains the following keys:

    - DB_USER: Database username.

    - DB_PASSWORD: Database password.

    - DB_HOST: Hostname or IP address of the database server.

    - DB_PORT: Port number the database server is listening on.

    - DB_NAME: Name of the database to connect to.

  • If neither start_time nor end_time is provided,

    the function returns all data from the specified table.

  • If only start_time is provided, the function returns data from start_time to the current time.

  • Ensure that the database server is accessible and the credentials provided are correct.

  • For more information on setting up the configuration file, visit:

    https://github.com/spark-cleantech/sda/blob/main/README.md#postgres-credentials--connection
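
As a usage sketch under stated assumptions (the instrument code and the timestamp strings below are placeholders; valid codes depend on the project's table mapping, which is not documented on this page), the three documented time-range behaviours look like this:

```python
from sda.api.load_data_ovh import load_data

# Hypothetical instrument code; valid values depend on the project's
# internal table mapping and are not listed here.
INSTRUMENT = "example_instrument"

# No time filter: all rows from the table mapped to this instrument.
df_all = load_data(INSTRUMENT)

# Only start_time: rows from start_time up to the current time.
df_since = load_data(INSTRUMENT, start_time="2024-01-01 00:00:00")

# Both bounds: rows restricted to an explicit window.
df_window = load_data(
    INSTRUMENT,
    start_time="2024-01-01 00:00:00",
    end_time="2024-02-01 00:00:00",
)

print(df_window.head())
```

The calls assume the VPN connection and a complete ~/sda.json are already in place; otherwise load_data raises the ValueError or ConnectionError described above.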