models Atomica - Azure/azureml-assets GitHub Wiki
ATOMICA is a hierarchical geometric deep learning model trained on over 2.1 million molecular interaction interfaces. It represents interaction complexes using an all-atom graph structure, where nodes correspond to atoms or grouped chemical blocks, and edges reflect both intra- and intermolecular spatial relationships. The model uses SE(3)-equivariant message passing to ensure that learned embeddings are invariant to rotations and translations of molecular structures. The architecture produces embeddings at multiple scales (atom, block, and graph) that capture fine-grained structural detail and broader functional motifs.
The model has been pretrained on 2,105,703 molecular interaction interfaces from the Protein Data Bank and Cambridge Structural Database, spanning multiple interaction types including protein-small molecule, protein-ion, small molecule-small molecule, protein-protein, protein-peptide, protein-RNA, protein-DNA, and nucleic acid-small molecule complexes. The pretraining strategy involves denoising transformations (rotation, translation, torsion) and masked block-type prediction, enabling the model to learn chemically grounded, transferable features. ATOMICA supports downstream tasks via plug-and-play adaptation with task-specific heads, including binding site prediction and protein interface fingerprinting. Project Website | GitHub
ATOMICA employs a hierarchical geometric deep learning architecture with the following key components:
- An all-atom graph representation of molecular complexes where nodes correspond to atoms or chemical blocks (a simplified sketch of such a structure follows this list)
- SE(3)-equivariant message passing to ensure rotational and translational invariance
- Multi-scale embeddings at atom, block, and graph levels
- Pretraining via denoising transformations and masked block-type prediction
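To make the graph representation concrete, below is a minimal, hypothetical sketch of what an all-atom, block-grouped complex graph could look like as a data structure. The class names, fields, and the 5 Å centroid cutoff are illustrative assumptions, not ATOMICA's actual implementation.

```python
from dataclasses import dataclass, field
import numpy as np


@dataclass
class Block:
    """A chemical block, e.g. an amino-acid residue or a small-molecule fragment."""
    block_type: str            # e.g. "ALA" or a ligand code
    atom_elements: list        # element symbol per atom, e.g. ["C", "N", "O"]
    atom_coords: np.ndarray    # (num_atoms, 3) Cartesian coordinates in Angstroms


@dataclass
class ComplexGraph:
    """Atoms grouped into blocks; edges capture intra- and intermolecular proximity."""
    blocks: list
    edges: list = field(default_factory=list)   # (block_i, block_j) index pairs

    def build_edges(self, cutoff: float = 5.0):
        """Connect pairs of blocks whose centroids lie within `cutoff` Angstroms."""
        centers = [b.atom_coords.mean(axis=0) for b in self.blocks]
        self.edges = [
            (i, j)
            for i in range(len(centers))
            for j in range(i + 1, len(centers))
            if np.linalg.norm(centers[i] - centers[j]) < cutoff
        ]
```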
Single PDB Input:
```python
data = {
    "input_data": {
        "columns": ["pdb_data"],
        "index": [0],
        "data": [
            ["https://path/to/local/protein.jsonl.gz"]
        ]
    }
}
```
Multiple PDB Input:
```python
data = {
    "input_data": {
        "columns": ["pdb_data"],
        "index": [0, 1],
        "data": [
            ["https://path/to/local/protein1.jsonl.gz"],
            ["https://path/to/local/protein2.jsonl.gz"]
        ]
    }
}
```
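Either payload above can be submitted to a deployed online endpoint. The following is a minimal sketch assuming a standard AzureML managed online endpoint; `endpoint_url` and `api_key` are placeholders you would copy from the endpoint's Consume tab, and the header conventions shown are general AzureML practice rather than anything specific to this model.

```python
import requests

# Placeholder values; copy the scoring URI and key from the endpoint's "Consume" tab in AzureML Studio.
endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
api_key = "<your-api-key>"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

# `data` is one of the request payloads shown above.
response = requests.post(endpoint_url, headers=headers, json=data)
response.raise_for_status()
predictions = response.json()
```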
Output Sample:
```json
{
  "predictions": [
    {
      "graph_embedding": [0.123456, -0.234567, ...],
      "block_embedding": [[0.345678, -0.456789, ...], ...],
      "atom_embedding": [[0.567890, -0.678901, ...], ...]
    }
  ]
}
```
Output Processing Example:
```python
import numpy as np


def process_predictions(predictions, is_batch=False):
    """Process predictions in a consistent way."""
    if not predictions:
        print("No predictions found")
        return
    # Extract predictions from response
    if isinstance(predictions, dict) and 'predictions' in predictions:
        predictions = predictions['predictions']
    # Handle both single and batch predictions
    if is_batch:
        if isinstance(predictions, list):
            # Multiple predictions
            for i, pred in enumerate(predictions, 1):
                print(f"\nPDB {i}:")
                _display_embeddings(pred)
        elif isinstance(predictions, dict):
            # Single prediction in batch format
            _display_embeddings(predictions)
        else:
            print(f"Error: Unexpected predictions type: {type(predictions)}")
    else:
        # Single prediction format
        if isinstance(predictions, list) and len(predictions) > 0:
            # Single prediction in list
            _display_embeddings(predictions[0])
        elif isinstance(predictions, dict):
            # Direct dictionary format
            _display_embeddings(predictions)
        else:
            print(f"Error: Invalid prediction format: {type(predictions)}")


def _display_embeddings(pred):
    """Display embeddings in a concise format."""
    if not isinstance(pred, dict):
        print(f"Error: Invalid prediction format: {type(pred)}")
        return
    for embed_type in ["graph_embedding", "block_embedding", "atom_embedding"]:
        if embed_type in pred:
            embedding = pred[embed_type]
            if isinstance(embedding, list):
                arr = np.array(embedding)
                print(f"\n{embed_type}:")
                print(f"Shape: {arr.shape}")
                print(f"Mean: {arr.mean():.6f}, Std: {arr.std():.6f}")
                print(f"Range: [{arr.min():.6f}, {arr.max():.6f}]")
        else:
            print(f"\nWarning: {embed_type} not found in predictions")
            print("Available keys:", list(pred.keys()))
```
Supported Data Input Format

- Input Format: The model accepts molecular structure data in PDB format, processed into compressed JSONL files (.jsonl.gz). Each file contains molecular interaction complex graphs with atomic coordinates and structural information.
- Input Methods: The model supports:
  - URLs pointing to remote .jsonl.gz files
  - Base64-encoded molecular structure data (see the sketch after this list)
- Output Format: The model generates multi-scale molecular embeddings at three hierarchical levels:
  - Graph embedding: overall molecular complex representation
  - Block embedding: chemical block-level features
  - Atom embedding: individual atomic-level representations
- Data Sources and Technical Details: For comprehensive information about training datasets, model architecture, and validation results, refer to the official ATOMICA repository.
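For the Base64 input method, a minimal sketch is shown below. It assumes the encoded file content is passed in the same `pdb_data` column as the URL variant; the exact field contract is an assumption here, so check the model's scoring script for the authoritative format.

```python
import base64

# Hypothetical local file; encode its raw bytes as a Base64 string.
with open("protein.jsonl.gz", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

data = {
    "input_data": {
        "columns": ["pdb_data"],
        "index": [0],
        "data": [[encoded]],
    }
}
```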
Version: 2
task : embeddings
industry : health-and-life-sciences
Preview
licenseDescription : MIT License
Copyright (c) 2024 Artificial Intelligence for Medicine and Science @ Harvard
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
inference_supported_envs : ['hf']
license : mit
author : Mims Harvard
hiddenlayerscanned
SharedComputeCapacityEnabled
inference_compute_allow_list : ['Standard_NC4as_T4_v3', 'Standard_NC8as_T4_v3', 'Standard_NC16as_T4_v3', 'Standard_NC64as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC12s_v3', 'Standard_NC24s_v3', 'Standard_NC24ads_A100_v4', 'Standard_NC48ads_A100_v4', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2', 'Standard_NC40ads_H100_v5', 'Standard_NC80adis_H100_v5', 'Standard_ND96isr_H100_v5']
View in Studio: https://ml.azure.com/registries/azureml/models/Atomica/version/2
License: mit
inference-min-sku-spec: 4|1|28|64
inference-recommended-sku: Standard_NC4as_T4_v3, Standard_NC8as_T4_v3, Standard_NC16as_T4_v3, Standard_NC64as_T4_v3, Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2, Standard_NC40ads_H100_v5, Standard_NC80adis_H100_v5, Standard_ND96isr_H100_v5
languages: en
SharedComputeCapacityEnabled: True