Open-source tool for running large language models like BLOOM-176B collaboratively: you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
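The description matches the Petals project, so here is a minimal client-side sketch of what "loading a small part of the model and teaming up with peers" looks like in practice. It assumes the Petals Python package; the `AutoDistributedModelForCausalLM` class and the `bigscience/bloom-petals` checkpoint name are assumptions that may differ between library versions.

```python
# Minimal sketch of collaborative inference from the client side.
# Assumes the Petals library is installed (e.g. `pip install petals`);
# class and checkpoint names are illustrative and version-dependent.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # assumed name of the swarm-served checkpoint

# The tokenizer is loaded locally, as with any Hugging Face model.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Only a small slice of the model (embeddings plus a few blocks) lives on this
# machine; the remaining transformer blocks are executed remotely by peers
# that are serving those parts of the model.
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A quick test prompt:", return_tensors="pt")["input_ids"]

# generate() follows the familiar Hugging Face interface, but each forward
# pass is routed through the chain of peers holding the missing layers.
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

Fine-tuning works along the same lines: gradients for the locally held parameters (for example, trainable prompts or adapter layers) are computed on the client, while activations flow through the remote blocks.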