Provided Hooks
Overview
On this page, we give more details on the ‘Provided Hooks’, i.e. the workflows and functions that have been pre-developed and pre-deployed by Destination Earth Data Lake.
The Hook service provides ready-to-use, high-level serverless workflows and functions, preconfigured to efficiently access and manipulate Destination Earth Data Lake (DEDL) data. A growing number of workflows and functions provide on-demand capabilities for diverse satellite data analysis needs.
The full list of available Hooks is given in the Hook Descriptions section further below.
Note
The main processor that will be of use to Destination Earth Data Lake users is the data-harvest
processor; with this you will be able to download data of interest to your S3 Object Storage.
A collection of Jupyter Notebook examples showing how to use the DestinE Data Lake services can be found at Destination Earth on GitHub.
Getting Started
Accessing the Jupyter Notebook Hook Tutorial
The simplest way to run the Hook Tutorial is to use the Destination Earth Data Lake (DEDL) - JupyterHub - Stack Service, which already contains a clone of the Github/destination-earth repository.
Alternatively, you can access the Github/destination-earth HOOK folder yourself, then click on the Hook Tutorial Notebook.
For more information on (DEDL) Stack Services, refer to Run a Notebook on JupyterHub.
The notebook is ready to use and by default will create a request to execute the data-harvest hook.
The notebook can be used with an optional .env_tutorial file that loads environment variables for use in the notebook (see the README.md).
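For reference, a hypothetical .env_tutorial might look like the following. The variable names are the ones read by the cells in this tutorial; all values are placeholders to adapt:

# Example .env_tutorial (all values are placeholders)
HOOK_WORKFLOW=data-harvest
HOOK_ORDER_NAME=my-first-order
HOOK_COLLECTION_ID=EO.ESA.DAT.SENTINEL-2.MSI.L1C
HOOK_DATA_ID=S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE
HOOK_IS_PRIVATE_STORAGE=False
HOOK_SOURCE_TYPE=DESP
# Only needed when HOOK_IS_PRIVATE_STORAGE=True
HOOK_OUTPUT_BUCKET=your-bucket-name
HOOK_OUTPUT_STORAGE_ACCESS_KEY=your-access-key
HOOK_OUTPUT_STORAGE_SECRET_KEY=your-secret-key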
Install python package requirements
# Note: The destinelab python package (which helps with authentication) is already available if you are using the Python DEDL kernel
# Otherwise, the destinelab python package can be installed by uncommenting the following line
# %pip install destinelab
# For importing environment variables using the load_dotenv(...) command
%pip install python-dotenv
# For example code navigating private S3-compatible storage (PRIVATE bucket storage)
%pip install boto3
##### OUTPUT FROM ABOVE CODE #####
Requirement already satisfied: python-dotenv in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (1.0.1)
Note: you may need to restart the kernel to use updated packages.
Requirement already satisfied: boto3 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (1.26.76)
Requirement already satisfied: botocore<1.30.0,>=1.29.76 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from boto3) (1.29.76)
Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from boto3) (1.0.1)
Requirement already satisfied: s3transfer<0.7.0,>=0.6.0 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from boto3) (0.6.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from botocore<1.30.0,>=1.29.76->boto3) (2.9.0)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from botocore<1.30.0,>=1.29.76->boto3) (1.26.19)
Requirement already satisfied: six>=1.5 in /opt/conda/envs/python_dedl/lib/python3.11/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.30.0,>=1.29.76->boto3) (1.16.0)
Note: you may need to restart the kernel to use updated packages.
Import packages and load optional environment variables from file
import os
import json
import requests
from dotenv import load_dotenv
from getpass import getpass
import destinelab
# Load (optional) notebook specific environment variables from .env_tutorial
load_dotenv("./.env_tutorial", override=True)
##### OUTPUT FROM ABOVE CODE #####
True
Enter your DESP username and password. This will obtain a token allowing you to interact with the Hook API (OnDemand Processing API).
# By default users should use their DESP credentials to obtain an access token
# This token is added as an Authorization header when interacting with the Hook Service API

# Enter DESP credentials.
DESP_USERNAME = input("Please input your DESP username or email: ")
DESP_PASSWORD = getpass("Please input your DESP password: ")

token = destinelab.AuthHandler(DESP_USERNAME, DESP_PASSWORD)
access_token = token.get_token()

# Check the status of the request
if access_token is not None:
    print("DEDL/DESP Access Token Obtained Successfully")
    # Save API headers
    api_headers = {"Authorization": "Bearer " + access_token}
else:
    print("Failed to Obtain DEDL/DESP Access Token")
##### OUTPUT FROM ABOVE CODE #####
Please input your DESP username or email: your.desp@emailaddress
Please input your DESP password: ········
Response code: 200
DEDL/DESP Access Token Obtained Successfully
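Since access tokens expire after a while, it can be convenient to wrap the calls above in a small helper that can be re-run to refresh the Authorization header. A minimal sketch, reusing only the destinelab calls shown above (the helper name is illustrative):

# Hypothetical helper: obtain a fresh token and rebuild the API headers
def get_api_headers(username, password):
    token = destinelab.AuthHandler(username, password)
    access_token = token.get_token()
    if access_token is None:
        raise RuntimeError("Failed to Obtain DEDL/DESP Access Token")
    return {"Authorization": "Bearer " + access_token}

# Re-run this whenever requests start failing with authorisation errors
api_headers = get_api_headers(DESP_USERNAME, DESP_PASSWORD)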
Setup Static Variables (sets the root URL of the Hook API - OnDemand Processing API: https://odp.data.destination-earth.eu/odata/v1/)
# Hook service url (ending with odata/v1/ - e.g. https://odp.data.destination-earth.eu/odata/v1/)
hook_service_root_url = "https://odp.data.destination-earth.eu/odata/v1/"
List Available workflows (Gives a list of Hooks - Names and Display Names. Also prints out json response)
# Send request and return a json object listing all provided workflows, ordered by Id
result = requests.get(
    f"{hook_service_root_url}Workflows?$orderby=Id asc", headers=api_headers
).json()

print("List of available DEDL provided Hooks")
for i in range(len(result["value"])):
    print(
        f"Name:{str(result['value'][i]['Name']).ljust(20, ' ')}DisplayName:{str(result['value'][i]['DisplayName'])}"
    )  # print JSON string

# Print result JSON object: containing provided workflow list
workflow_details = json.dumps(result, indent=4)
print(workflow_details)
##### OUTPUT FROM ABOVE CODE #####
List of available DEDL provided Hooks
Name:card_bs             DisplayName:Sentinel-1: Terrain-corrected backscatter
Name:lai                 DisplayName:Sentinel-2: SNAP-Biophysical
Name:odp-test            DisplayName:ODP Test
Name:card_cohinf         DisplayName:Sentinel-1 Coherence/Interferometry
Name:c2rcc               DisplayName:Sentinel-2: C2RCC
Name:copdem              DisplayName:Copernicus DEM Mosaic
Name:dedl_hello_world    DisplayName:DEDL Hello World
Name:data-harvest        DisplayName:Data harvest
Name:sen2cor             DisplayName:Sentinel-2: Sen2Cor
Name:maja                DisplayName:Sentinel-2: MAJA Atmospheric Correction
... json with more detail of each processor
Now we choose a workflow so that we can see the details of how to execute it (default is ‘data-harvest’). The json response shows you the options available.
# Select workflow : defaults to data-harvest
workflow = os.getenv("HOOK_WORKFLOW", "data-harvest")
print(f"workflow: {workflow}")

# Send request
result = requests.get(
    f"{hook_service_root_url}Workflows?$expand=WorkflowOptions&$filter=(Name eq '{workflow}')",
    headers=api_headers,
).json()

workflow_details = json.dumps(result, indent=4)
print(workflow_details)  # print formatted workflow_details, a JSON string
##### OUTPUT FROM ABOVE CODE #####
workflow: data-harvest
{
    "@odata.context": "$metadata#Workflows/$entity",
    "value": [
        {
            "Id": "11",
            "Uuid": null,
            "Name": "data-harvest",
            "DisplayName": "Data harvest",
            "Documentation": null,
            "Description": "Data-harvest is a workflow allows to download data from external sources. It requires URL to the external catalogue, credentials and data to download. The workflow is mainly used to download data from HDA (https://hda.data.destination-earth.eu/) using STAC.",
            "InputProductType": null,
            "InputProductTypes": [],
            "OutputProductType": null,
            "OutputProductTypes": [],
            "WorkflowVersion": "0.0.1",
            "WorkflowOptions": [
                ... more detail
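The WorkflowOptions array in this response describes the options the selected workflow accepts. To list just the option names, a short sketch like the following can help (it assumes the response structure shown above; fields other than Name may vary between workflows):

# Print the name of every option accepted by the selected workflow
for workflow_option in result["value"][0]["WorkflowOptions"]:
    print(workflow_option.get("Name"))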
Set the Name of the order (order_name). This allows us to easily identify the Order in the following steps.
# Here we set the variable order_name, this will allow us to:
# Easily identify the running process (e.g. when checking the status)
# order_name is added as a suffix to the order 'Name'
order_name = os.getenv("HOOK_ORDER_NAME") or input("Name your order: ")
print(f"order_name:{order_name}")
##### OUTPUT FROM ABOVE CODE #####
order_name:jess-20241118-2
Next (optional) we set the PRIVATE bucket details (access key and secret key); this is only necessary if you want to use a private bucket for output. By default, this tutorial uses TEMPORARY storage.
# Output storage - Islet service
# Note: If you want the output to go to your own PRIVATE bucket rather than TEMPORARY storage (expires after 2 weeks),
# i) This configuration will need to be updated with your output_bucket, output_storage_access_key, output_storage_secret_key, output_prefix
# ii) You will need to change the output_storage in the order to PRIVATE and add the necessary source_ parameters (see workflow options and commented example)
# URL of the S3 endpoint in the Central Site (or lumi etc.)
output_storage_url = "https://s3.central.data.destination-earth.eu"
# output_storage_url = "https://s3.lumi.data.destination-earth.eu"
# Name of the object storage bucket where the results will be stored.
output_bucket = os.getenv("HOOK_OUTPUT_BUCKET", "your-bucket-name")
print(f"output_bucket : {output_bucket}")
# Islet object storage credentials (openstack ec2 credentials)
output_storage_access_key = os.getenv("HOOK_OUTPUT_STORAGE_ACCESS_KEY", "your-access-key")
output_storage_secret_key = os.getenv("HOOK_OUTPUT_STORAGE_SECRET_KEY", "your-secret-key")
print(f"output_storage_access_key: {output_storage_access_key}")
print(f"output_storage_secret_key: {output_storage_secret_key}")
# This is the name of the folder in your output_bucket where the output of the hook will be stored.
# Here we concatenate 'dedl' with the 'workflow' and 'order_name'
output_prefix = f"dedl-{workflow}-{order_name}"
print(f"output_prefix : {output_prefix}")
##### OUTPUT FROM ABOVE CODE #####
output_bucket : your-bucket-name
output_storage_access_key: your-access-key
output_storage_secret_key: your-secret-key
output_prefix : dedl-data-harvest-jess-20241118-2
Next we set some obligatory parameters for the Order, in particular the collection_id and data_id. By default, ‘TEMPORARY’ storage is set and the source_type is ‘DESP’, i.e. the simplified configuration using DESP credentials, which uses the DEDL HDA component in the background.
# URL of the STAC server where your collection/item can be downloaded
stac_hda_api_url = "https://hda.data.destination-earth.eu/stac"
# Note: The data (collection_id and data_id) will have been previously discovered and searched for
# Set collection where the item can be found : defaults to example for data-harvest
collection_id = os.getenv("HOOK_COLLECTION_ID", "EO.ESA.DAT.SENTINEL-2.MSI.L1C")
print(f"STAC collection url: {stac_hda_api_url}/collections/{collection_id}")
# Set the Item to Retrieve : defaults to example for data-harvest. If Multiple Values, provide comma separated list
data_id = os.getenv("HOOK_DATA_ID", "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE")
print(f"data_id: {data_id}")
identifier_list = [data_id_element.strip() for data_id_element in data_id.split(',')]
# Get boolean value from String, default (False)
is_private_storage = os.getenv("HOOK_IS_PRIVATE_STORAGE", "False") == "True"
print(f"is_private_storage: {is_private_storage}")
# we use source_type to add DESP or EXTERNAL specific configuration
source_type = os.getenv("HOOK_SOURCE_TYPE", "DESP")
print(f"source_type: {source_type}")
if source_type == "EXTERNAL":
    EXTERNAL_USERNAME = os.getenv("HOOK_EXTERNAL_USERNAME", "EXTERNAL_USERNAME")
    EXTERNAL_PASSWORD = os.getenv("HOOK_EXTERNAL_PASSWORD", "EXTERNAL_PASSWORD")
    EXTERNAL_TOKEN_URL = os.getenv("HOOK_EXTERNAL_TOKEN_URL", "EXTERNAL_TOKEN_URL")
    EXTERNAL_CLIENT_ID = os.getenv("HOOK_EXTERNAL_CLIENT_ID", "EXTERNAL_CLIENT_ID")
##### OUTPUT FROM ABOVE CODE #####
STAC collection url: https://hda.data.destination-earth.eu/stac/collections/EO.ESA.DAT.SENTINEL-2.MSI.L1C
data_id: S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE
is_private_storage: False
source_type: DESP
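If you have not yet discovered an item, a search against the HDA STAC API can suggest candidate data_id values. The sketch below assumes the standard STAC API /search endpoint and query parameters, so adapt it to the HDA documentation as needed:

# Hypothetical STAC item search to discover data_id values (standard STAC /search endpoint assumed)
search_response = requests.get(
    f"{stac_hda_api_url}/search",
    params={"collections": collection_id, "limit": 5},
    headers=api_headers,
).json()
for feature in search_response.get("features", []):
    print(feature["id"])  # candidate value for data_id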
Next we execute the order. Note that hooks normally have a simplified configuration using the DESP source_type (which gets data from the DEDL HDA component and uses DESP credentials).
########## BUILD ORDER BODY : CHOOSE PRIVATE or TEMPORARY output_storage ##########

# Initialise the order_body
order_body_custom_bucket = {
    "Name": "Tutorial " + workflow + " - " + order_name,
    "WorkflowName": workflow,
    "IdentifierList": identifier_list,
    "WorkflowOptions": [],
}

##### Configure PRIVATE OR TEMPORARY STORAGE #####
if is_private_storage:
    print("##### Preparing Order Body for PRIVATE STORAGE #####")
    order_body_custom_bucket["WorkflowOptions"].extend(
        [
            {"Name": "output_storage", "Value": "PRIVATE"},
            {"Name": "output_s3_access_key", "Value": output_storage_access_key},
            {"Name": "output_s3_secret_key", "Value": output_storage_secret_key},
            {"Name": "output_s3_path", "Value": f"s3://{output_bucket}/{output_prefix}"},
            {"Name": "output_s3_endpoint_url", "Value": output_storage_url},
        ]
    )
else:
    print("##### Preparing Order Body for TEMPORARY STORAGE #####")
    order_body_custom_bucket["WorkflowOptions"].extend(
        [
            {"Name": "output_storage", "Value": "TEMPORARY"},
        ]
    )

##### Configure SOURCE_TYPE and associated parameters #####
if source_type == "DESP":
    # Using DESP credentials is the standard way of executing Hooks.
    print("##### Preparing Order Body for access to DEDL HDA using DESP Credentials #####")
    order_body_custom_bucket["WorkflowOptions"].extend(
        [
            {"Name": "source_type", "Value": "DESP"},
            {"Name": "desp_source_username", "Value": DESP_USERNAME},
            {"Name": "desp_source_password", "Value": DESP_PASSWORD},
            {"Name": "desp_source_collection", "Value": collection_id},
        ]
    )
elif source_type == "EXTERNAL":
    # Build your order body : Example using the EXTERNAL source type and source_catalogue_api_type STAC.
    # This would allow you to access products directly from a configured STAC server.
    # Here we show an example configuration of a STAC server with OIDC security, which could be adapted to your needs (change URLs, etc.)
    # This is shown for example purposes only. The standard way of configuring is with the DESP source_type seen above.
    print("##### Preparing Order Body for access to EXTERNAL STAC Server using EXTERNAL Credentials #####")
    order_body_custom_bucket["WorkflowOptions"].extend(
        [
            {"Name": "source_type", "Value": "EXTERNAL"},
            {"Name": "source_catalogue_api_url", "Value": stac_hda_api_url},
            {"Name": "source_catalogue_api_type", "Value": "STAC"},
            {"Name": "source_token_url", "Value": EXTERNAL_TOKEN_URL},
            {"Name": "source_grant_type", "Value": "PASSWORD"},
            {"Name": "source_auth_header_name", "Value": "Authorization"},
            {"Name": "source_username", "Value": EXTERNAL_USERNAME},
            {"Name": "source_password", "Value": EXTERNAL_PASSWORD},
            {"Name": "source_client_id", "Value": EXTERNAL_CLIENT_ID},
            {"Name": "source_client_secret", "Value": ""},
            {"Name": "source_catalogue_collection", "Value": collection_id},
        ]
    )
else:
    print("source_type not equal to DESP or EXTERNAL")

########## ADDITIONAL OPTIONS ##########
additional_options = []

# Checks environment variables of the form HOOK_ADDITIONAL1="NAME=option_name;VALUE=option_value;TYPE=str"
for env_key, env_value in os.environ.items():
    if env_key.startswith("HOOK_ADDITIONAL"):
        # print(f"{env_key}: {env_value}")
        parts = env_value.split(";")
        # Extract the name, value and type
        name = parts[0].split("=")[1]
        value = parts[1].split("=")[1]
        value_type = parts[2].split("=")[1]
        additional_options.append({"Name": name, "Value": value if value_type == "str" else int(value)})

print(f"additional_options:{additional_options}")
if additional_options:
    print("Adding additional_options")
    order_body_custom_bucket["WorkflowOptions"].extend(additional_options)

########## BUILD ORDER BODY : END ##########

# Send order
order_request = requests.post(
    hook_service_root_url + "BatchOrder/OData.CSC.Order",
    json.dumps(order_body_custom_bucket),
    headers=api_headers,
).json()

# If code = 201, the order has been successfully sent
# Print order_request JSON object: containing order_request details
order_request_details = json.dumps(order_request, indent=4)
print(order_request_details)

order_id = order_request["value"]["Id"]
print(f"order 'Id' from order_request: {order_id}")
##### OUTPUT FROM ABOVE CODE #####
##### We see that the order is 'queued' and has an order 'id' 27830
##### Preparing Order Body for TEMPORARY STORAGE #####
##### Preparing Order Body for access to DEDL HDA using DESP Credentials #####
additional_options:[]
{
    "@odata.context": "#metadata/Odata.CSC.BatchOrder",
    "value": {
        "Name": "Tutorial data-harvest - jess-20241118-2",
        "Priority": 1,
        "WorkflowName": "data-harvest",
        "NotificationEndpoint": null,
        "NotificationEpUsername": null,
        "NotificationStatus": null,
        "WorkflowOptions": [
            {
                "Name": "platform",
                "Value": "dedl"
            },
            {
                "Name": "version",
                "Value": "0.0.1"
            },
            {
                "Name": "output_storage",
                "Value": "TEMPORARY"
            },
            {
                "Name": "source_type",
                "Value": "DESP"
            },
            {
                "Name": "desp_source_username",
                "Value": "your.email@address"
            },
            {
                "Name": "desp_source_password",
                "Value": "gAAAAABnO0YNa7ARI0ZuGY-..."
            },
            {
                "Name": "desp_source_collection",
                "Value": "EO.ESA.DAT.SENTINEL-2.MSI.L1C"
            }
        ],
        "Id": 27830,
        "Status": "queued",
        "SubmissionDate": "2024-11-18T13:50:05.231Z",
        "EstimatedDate": null,
        "KeycloakUUID": "a619e269-b4b9-49fc-ae7b-dc1e5fb6fe81",
        "WorkflowId": 17,
        "WorkflowDisplayName": "Data harvest",
        "WorkflowVersion": "0.0.1"
    }
}
order 'Id' from order_request: 27830
Now that the order has been made, we can check the status of the order (queued, in_progress, completed).
# ProductionOrders endpoint gives status of orders (only with one item attached)
# Otherwise use BatchOrder(XXXX)/Products endpoint to get status of individual items associated with order
if len(identifier_list) == 1:
    order_status_url = f"{hook_service_root_url}ProductionOrders"
    params = {"$filter": f"id eq {order_id}"}
    order_status_response = requests.get(order_status_url, params=params, headers=api_headers).json()
    print(json.dumps(order_status_response, indent=4))

# Get Status of all items of an order in this way
order_status_response = requests.get(
    f"{hook_service_root_url}BatchOrder({order_id})/Products",
    headers=api_headers,
).json()
print(json.dumps(order_status_response, indent=4))
##### OUTPUT FROM ABOVE CODE #####
##### After a short period the status should move from 'queued' to 'in progress' to 'completed'
##### When the status is 'completed' and you are using temporary storage, you will find a temporary download link for your product
{
    "@odata.context": "$metadata#ProductionOrder/$entity",
    "value": [
        {
            "Id": "27830",
            "Status": "completed",
            "StatusMessage": "requested output product is available",
            "SubmissionDate": "2024-11-18T13:50:05.231Z",
            "Name": "Tutorial data-harvest - jess-20241118-2",
            "EstimatedDate": "2024-11-18T14:00:19.801Z",
            "InputProductReference": {
                "Reference": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
                "ContentDate": null
            },
            "WorkflowOptions": [
                {
                    "Name": "platform",
                    "Value": "dedl"
                },
                {
                    "Name": "version",
                    "Value": "0.0.1"
                },
                {
                    "Name": "output_storage",
                    "Value": "TEMPORARY"
                },
                {
                    "Name": "source_type",
                    "Value": "DESP"
                },
                {
                    "Name": "desp_source_username",
                    "Value": "your.desp@emailaddress"
                },
                {
                    "Name": "desp_source_collection",
                    "Value": "EO.ESA.DAT.SENTINEL-2.MSI.L1C"
                }
            ],
            "WorkflowName": "data-harvest",
            "WorkflowId": 11,
            "Priority": 1,
            "NotificationEndpoint": null,
            "NotificationEpUsername": null,
            "NotificationStatus": null
        }
    ]
}
{
    "@odata.context": "#metadata/OData.CSC.BatchorderItem",
    "value": [
        {
            "Id": 36591,
            "BatchOrderId": 27830,
            "InputProductReference": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
            "SubmissionDate": "2024-11-18T13:50:05.158Z",
            "Status": "completed",
            "ProcessedName": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
            "ProcessedSize": 779499257,
            "OutputUUID": null,
            "StatusMessage": "Processing finished successfully",
            "CompletedDate": "2024-11-18T13:54:25.439Z",
            "DownloadLink": "https://s3.central.data.destination-earth.eu/swift/v1/tmp-storage/20241118_36591_...",
            "NotificationStatus": null
        }
    ]
}
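Rather than re-running the status cell by hand, you can poll the Products endpoint until every item finishes. A minimal sketch, assuming the endpoint and Status values shown above (in practice you may also want to break on failure statuses):

import time

# Poll the order until all items report 'completed'
while True:
    products = requests.get(
        f"{hook_service_root_url}BatchOrder({order_id})/Products",
        headers=api_headers,
    ).json()["value"]
    statuses = {product["Status"] for product in products}
    print(f"statuses: {statuses}")
    if statuses == {"completed"}:
        break
    time.sleep(30)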
Following this, we have some code that lists the files in your PRIVATE storage (if that option was selected).
# PRIVATE STORAGE: Prints contents of Private Bucket
import boto3

if is_private_storage:
    s3 = boto3.client(
        "s3",
        aws_access_key_id=output_storage_access_key,
        aws_secret_access_key=output_storage_secret_key,
        endpoint_url=output_storage_url,
    )

    paginator = s3.get_paginator("list_objects_v2")
    pages = paginator.paginate(Bucket=output_bucket, Prefix=output_prefix + "/")

    for page in pages:
        try:
            for obj in page["Contents"]:
                print(obj["Key"])
        except KeyError:
            print("No files exist")
            exit(1)
##### OUTPUT FROM ABOVE CODE #####
... nothing because we used temporary storage
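If the outputs did land in a PRIVATE bucket, the same boto3 approach can also download them. A minimal, self-contained sketch (the local file names are illustrative):

import boto3

# PRIVATE STORAGE: download every object under the output prefix to the current directory
if is_private_storage:
    s3 = boto3.client(
        "s3",
        aws_access_key_id=output_storage_access_key,
        aws_secret_access_key=output_storage_secret_key,
        endpoint_url=output_storage_url,
    )
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=output_bucket, Prefix=output_prefix + "/"):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            local_name = key.rsplit("/", 1)[-1] or "unnamed"  # illustrative local name
            s3.download_file(output_bucket, key, local_name)
            print(f"Downloaded {key} -> {local_name}")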
This section checks whether we are using TEMPORARY storage, gives the status with the DownloadLink, and offers the option to download the file(s) programmatically.
# List order items within a production order
# When the output_storage is of type TEMPORARY we can get a DownloadLink from the following code
# (Items can also optionally be downloaded here in code with the flag is_download_products)

# If TEMPORARY storage
if not is_private_storage:
    # Set to True to download products at the same level as the notebook file. File names will be in the format "output-{workflow}-{order_id}-{product_id}.zip"
    is_download_products = False

    # Get Status of all items of an order in this way
    product_status_response = requests.get(
        f"{hook_service_root_url}BatchOrder({order_id})/Products",
        headers=api_headers,
    ).json()
    print(json.dumps(product_status_response, indent=4))

    if is_download_products:
        is_all_products_completed = True
        # We only attempt to download products when each of the items is in 'completed' status.
        for i in range(len(product_status_response["value"])):
            product_id = product_status_response["value"][i]["Id"]
            product_status = product_status_response["value"][i]["Status"]
            if product_status != "completed":
                is_all_products_completed = False

        # Can download if all products completed
        if is_all_products_completed:
            for i in range(len(product_status_response["value"])):
                product_id = product_status_response["value"][i]["Id"]
                product_status = product_status_response["value"][i]["Status"]

                # Infer the url of the product
                url_product = f"{hook_service_root_url}BatchOrder({order_id})/Product({product_id})/$value"
                print(f"url_product: {url_product}")

                # Download the product
                r = requests.get(
                    url_product, headers=api_headers, allow_redirects=True
                )
                product_file_name = f"output-{workflow}-{order_id}-{product_id}.zip"
                open(product_file_name, "wb").write(r.content)
                print(f"Download Complete: product_file_name: {product_file_name}")
        else:
            print(f"Status for order:{order_id} - At least one of the products does not have the status of 'completed'.")
##### OUTPUT FROM ABOVE CODE - Status Queued #####
{
    "@odata.context": "#metadata/OData.CSC.BatchorderItem",
    "value": [
        {
            "Id": 36591,
            "BatchOrderId": 27830,
            "InputProductReference": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
            "SubmissionDate": "2024-11-18T13:50:05.158Z",
            "Status": "queued",
            "ProcessedName": null,
            "ProcessedSize": null,
            "OutputUUID": null,
            "StatusMessage": "",
            "CompletedDate": null,
            "DownloadLink": null,
            "NotificationStatus": null
        }
    ]
}
##### OUTPUT FROM ABOVE CODE - Status Complete #####
{
    "@odata.context": "#metadata/OData.CSC.BatchorderItem",
    "value": [
        {
            "Id": 36591,
            "BatchOrderId": 27830,
            "InputProductReference": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
            "SubmissionDate": "2024-11-18T13:50:05.158Z",
            "Status": "completed",
            "ProcessedName": "S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE",
            "ProcessedSize": 779499257,
            "OutputUUID": null,
            "StatusMessage": "Processing finished successfully",
            "CompletedDate": "2024-11-18T13:54:25.439Z",
            "DownloadLink": "https://s3.central.data.destination-earth.eu/swift/v1/tmp-storage/20241118_...",
            "NotificationStatus": null
        }
    ]
}
Hook Descriptions
The following table lists the pre-developed and pre-deployed Hooks made available by Destination Earth Data Lake. Data harvest, the processor you will use most often, is described in more detail below the table:

Name                Display Name
card_bs             Sentinel-1: Terrain-corrected backscatter
lai                 Sentinel-2: SNAP-Biophysical
odp-test            ODP Test
card_cohinf         Sentinel-1 Coherence/Interferometry
c2rcc               Sentinel-2: C2RCC
copdem              Copernicus DEM Mosaic
dedl_hello_world    DEDL Hello World
data-harvest        Data harvest
sen2cor             Sentinel-2: Sen2Cor
maja                Sentinel-2: MAJA Atmospheric Correction

Data-harvest is a workflow that allows users to download data from external sources. It requires a URL to the external catalogue, credentials, and the data to download. The workflow is mainly used to download data from HDA (https://hda.data.destination-earth.eu/) using STAC.
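For reference, a minimal data-harvest order body, as built step by step in the tutorial above (TEMPORARY storage, DESP source type; the name and credential values are placeholders):

# Minimal data-harvest order body (placeholder values; see the tutorial above)
minimal_order_body = {
    "Name": "Tutorial data-harvest - my-order",
    "WorkflowName": "data-harvest",
    "IdentifierList": ["S2A_MSIL1C_20230910T050701_N0509_R019_T47VLH_20230910T074321.SAFE"],
    "WorkflowOptions": [
        {"Name": "output_storage", "Value": "TEMPORARY"},
        {"Name": "source_type", "Value": "DESP"},
        {"Name": "desp_source_username", "Value": "your.desp@emailaddress"},
        {"Name": "desp_source_password", "Value": "your-password"},
        {"Name": "desp_source_collection", "Value": "EO.ESA.DAT.SENTINEL-2.MSI.L1C"},
    ],
}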