MLOps for humans

omega-ml is a Python-native MLOps platform that provides a scalable development and runtime environment for your AI systems. It works everywhere, from your laptop to the cloud or on-premises.

Note

Release 0.17.0 introduces Generative AI as a first-class citizen and a new structure in this documentation to reflect this.


While it has always been possible to use generative AI models with omega-ml, this release introduces a new set of components and APIs to match new use cases. This includes explicit support for RAG pipelines, a completion API that works with third-party chat clients, and the ability to stream responses via the REST API.

There are now dedicated sections for Classic ML and Generative AI, which cover the respective workflows and capabilities of omega-ml. The common aspects are covered in Task Automation, Deployment and Operations, and Command-line interface. The Advanced features section covers topics relevant to both classic ML and generative AI workflows.

The top-level sections of the documentation are now organized into User Guide, Deploying omega-ml, Extending omega-ml, and Reference. This structure provides a more focused documentation experience, making it easier to find the information you need for your specific use case. The actual content for classic ML and the other sections did not change.