Data & Integration Tools
Connect to REST APIs, query databases, and ingest files from external systems.
These tools connect MyAi to your existing enterprise stack — CRMs, ERPs, databases, file stores, and any system with a REST API or SQL interface.
api_client — REST/HTTP
The primary tool for outbound integrations. Use it to fetch data, submit information, or trigger actions in any third-party system.
Key Parameters
| Parameter | Required | Description |
|---|---|---|
| credentials_artifact | Yes (for auth) | The artifact_id of an integration_credentials artifact storing the base URL and auth config. |
| method | Yes | HTTP method: GET, POST, PUT, PATCH, or DELETE. |
| endpoint | One of endpoint/url | Relative path appended to the base URL in your credentials artifact (e.g., /api/customers). |
| url | One of endpoint/url | Full absolute URL for ad-hoc calls not tied to a credentials artifact. |
| body | No | Request body. Pass a Python dict for JSON; it is auto-serialized with the correct headers. |
| query_params | No | Dictionary of URL query parameters (e.g., {"limit": 10, "status": "active"}). |
| headers | No | Additional HTTP headers (e.g., {"Accept": "application/xml"}). |
| files | No | List of file parts for multipart/form-data uploads. Each part needs field_name, bytes, filename, and content_type. |
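The body and query_params parameters combine naturally in a POST call. The sketch below assumes a hypothetical /customers endpoint and a crm-api-creds credentials artifact; default_api is the platform-provided client object.

```python
def create_customer(name: str, email: str):
    # The dict passed as `body` is serialized to JSON automatically,
    # with the appropriate Content-Type header set for you.
    response = default_api.api_client(
        credentials_artifact="crm-api-creds",  # assumed artifact id
        method="POST",
        endpoint="/customers",                 # hypothetical endpoint
        query_params={"notify": "true"},       # appended to the URL
        body={"name": name, "email": email},
    )
    if response and response.get("status") == "success":
        return response.get("data")
    return None
```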
Authentication Patterns
The credentials_artifact supports:
- Basic Auth — username/password
- API Key — header or query parameter
- Bearer Token — OAuth tokens
Warning: Never embed API keys directly in function code. Always reference an integration_credentials artifact.
Example
```python
def main(customer_id: str):
    response = default_api.api_client(
        credentials_artifact="crm-api-creds",
        method="GET",
        endpoint=f"/customers/{customer_id}",
        headers={"Accept": "application/json"}
    )
    if response and response.get("status") == "success":
        return response.get("data")
    return None
```
sql_client — BigQuery, MySQL, PostgreSQL
Direct database interaction for "Query-in-Place" workflows. MyAi does not ingest your database — it executes queries and returns structured JSON.
Key Parameters
| Parameter | Required | Description |
|---|---|---|
| credentials | Yes (MySQL/PG) | The artifact_id of an integration_credentials artifact with connection details. May be omitted for system BigQuery with service account access. |
| operation | Yes | execute_sql, list_tables (schema discovery), or test_connection. |
| sql | Yes (for execute_sql) | The SQL query string. |
| dataset | No | Database name (MySQL) or dataset name (BigQuery). |
| max_rows | No | Maximum rows to return (default: 50, max: 500). |
Tip: Use the list_tables and test_connection operations to discover schema and verify connectivity before writing queries.
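A minimal discovery sketch of that tip, assuming a hypothetical warehouse-creds credentials artifact and a user_management_db dataset (default_api is the platform-provided client):

```python
def explore_warehouse():
    # Verify connectivity before anything else.
    check = default_api.sql_client(
        operation="test_connection",
        credentials="warehouse-creds",  # assumed artifact id
    )
    if not check or check.get("status") != "success":
        return None

    # Discover available tables, then write queries against what exists.
    tables = default_api.sql_client(
        operation="list_tables",
        credentials="warehouse-creds",
        dataset="user_management_db",   # hypothetical dataset name
    )
    return tables.get("data") if tables else None
```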
Example
```python
def main(user_id: str):
    # Note: interpolating user input into SQL is unsafe unless the value
    # is validated; constrain or sanitize user_id before querying.
    result = default_api.sql_client(
        operation="execute_sql",
        credentials="warehouse-creds",
        dataset="user_management_db",
        sql=f"SELECT username, email, created_at FROM users WHERE id = '{user_id}'"
    )
    if result and result.get("status") == "success":
        return result.get("data")
    return None
```
file_processor — Ingestion & Conversion
Handles the "unstructured-to-structured" pipeline — pulling files from external sources and converting them into MyAi Artifacts.
Key Parameters
| Parameter | Required | Description |
|---|---|---|
| operation | Yes | download or process; fetches a file from a URL and creates an artifact. |
| url | Yes | An https:// or gs:// URL. For SharePoint, use the Microsoft Graph API download URL. |
| credentials_artifact | No | For authenticated downloads (e.g., secure SharePoint documents). |
| source_id | No | Unique identifier for deduplication. If the same source_id was processed before, the cached artifact is returned. |
| filename | No | Override for the filename. Useful for MIME type detection. |
| artifact_id | No | Retrieves the raw bytes of an already-processed artifact (for re-upload via api_client). |
Supported Formats
PDF, XLSX, DOCX, images, and more — automatically converted into MyAi Artifacts.
Example: Download, Then Re-Upload
```python
def main(report_url: str, upload_creds_id: str):
    # 1. Download the external file as a MyAi artifact
    download = default_api.file_processor(
        operation="download",
        url=report_url,
        filename="monthly_report.pdf"
    )
    artifact_id = download.get("artifact_id")

    # 2. Retrieve the raw bytes of the processed artifact
    file_data = default_api.file_processor(
        operation="download",
        artifact_id=artifact_id
    )

    # 3. Upload to another system as multipart/form-data
    default_api.api_client(
        credentials_artifact=upload_creds_id,
        method="POST",
        endpoint="/upload-reports",
        files=[{
            "field_name": "report_file",
            "bytes": file_data.get("bytes"),
            "filename": "monthly_report.pdf",
            "content_type": "application/pdf"
        }]
    )
```
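The source_id parameter makes ingestion idempotent: re-running a workflow with the same identifier returns the cached artifact instead of downloading again. A sketch, assuming a hypothetical invoice-numbering scheme (default_api is the platform-provided client):

```python
def ingest_invoice(invoice_url: str, invoice_number: str):
    # Reusing the same source_id deduplicates: a repeat run with the
    # same invoice_number returns the previously created artifact.
    result = default_api.file_processor(
        operation="download",
        url=invoice_url,
        source_id=f"invoice-{invoice_number}",      # hypothetical dedup key
        filename=f"invoice_{invoice_number}.pdf",   # aids MIME detection
    )
    return result.get("artifact_id") if result else None
```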
Learn More
- Functions — Write custom Python logic that orchestrates these data tools.
- Integrations — See the full connector catalog and supported systems.