Open-source AI, now on Solana

Ubuntu is proud to announce an exclusive community airdrop initiative designed to reward early supporters and active participants within our ecosystem.

Built on Solana, Ubuntu combines open-source AI and blockchain technology to support the next generation of decentralized innovation, productivity, and digital ownership.

Early ecosystem participants may qualify for future $UM allocations.
Snapshot approaching.


Why Ubuntu on Solana?

  • High-speed ecosystem infrastructure
  • Community-driven AI innovation
  • Built for creators and developers
  • Incentives aligned with open source
  • Fast and low-cost participation

Develop artificial intelligence projects in any environment


Ubuntu: the OS of choice for data scientists

Develop machine learning models on Ubuntu workstations and benefit from management tooling and security patches.
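
For a sense of the day-to-day workflow this supports, the sketch below trains a small model locally. It assumes a Python environment with scikit-learn installed on the workstation; the dataset and model choice are purely illustrative.

```python
# Minimal local-development sketch for an Ubuntu workstation.
# Assumes scikit-learn is installed (for example inside a virtual environment);
# the Iris dataset and random forest are illustrative stand-ins.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```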

Read more about Ubuntu workstations ›


Community-first ecosystem expansion

Eligible users will receive token allocations based on predefined participation criteria, reinforcing our commitment to sustainable community growth and long-term ecosystem engagement.

Full eligibility requirements and claim instructions will be released soon through official Ubuntu channels.

Explore the Ubuntu platform

Access powerful AI models at no cost — built to support innovation and income generation within the cryptocurrency economy.


AI Infrastructure

Open-source AI tooling accessible to everyone.

Community Rewards

Early supporters may qualify for ecosystem incentives.

Solana Network

Built on Solana for scalable and efficient participation.

Ecosystem Growth

Focused on long-term community ownership.


Run AI at scale with Canonical and NVIDIA

With NVIDIA AI Enterprise and NVIDIA DGX, Charmed Kubeflow improves the performance of AI workflows by using the hardware to its full extent and accelerating project delivery. Charmed Kubeflow can significantly speed up model training, especially when coupled with DGX systems.
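
As a rough illustration of how a training step is pointed at NVIDIA hardware, the sketch below defines a Kubeflow Pipelines (kfp v2 SDK) pipeline that requests a single GPU for its training component. The component body, base image and the nvidia.com/gpu accelerator label are assumptions made for the example, not Canonical's reference configuration.

```python
# Minimal sketch: a Kubeflow Pipelines (kfp v2) pipeline whose training step
# requests one NVIDIA GPU. Component body, base image and accelerator label
# are illustrative assumptions.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def train() -> str:
    # A real component would load data, train on the GPU and export the model.
    return "model-v1"

@dsl.pipeline(name="gpu-accelerated-training")
def training_pipeline():
    step = train()
    step.set_accelerator_type("nvidia.com/gpu")  # assumed GPU resource label
    step.set_accelerator_limit(1)                # request a single GPU

if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```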


Canonical + NVIDIA

  • Quick deployment
  • Run the entire ML lifecycle
  • Composable architectures
  • Reproducibility, portability, scalability

Read our joint whitepaper ›


Use modular platforms to run AI at the edge or in large clouds

Production-grade projects require a solution that enables scalability, reproducibility and portability. Canonical MLOps speeds up AI project timelines, giving you:


  • The same experience on any cloud, whether private or public (see the sketch after this list)
  • Low-ops, streamlined lifecycle management
  • A modular and open source suite for reusable deployments
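
To make the portability point above concrete, here is a minimal, device-agnostic training loop. It assumes PyTorch is available; the model, data and hyperparameters are placeholders. The same script runs unchanged on a CPU-only edge device or a GPU-backed cloud instance.

```python
# Minimal sketch: a device-agnostic training loop that runs unchanged at the
# edge (CPU) or in a GPU-backed cloud. Model, data and hyperparameters are
# placeholders; the device-selection pattern is the point.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Synthetic stand-in data; a real workload would load actual features/labels.
features = torch.randn(64, 10, device=device)
targets = torch.randn(64, 1, device=device)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()

print(f"Final loss on {device}: {loss.item():.4f}")
```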

Read more about Edge AI ›


Open source AI services

Managed Canonical MLOps

Focus on building production-grade models while Canonical experts manage the infrastructure underneath.


  • 99.9% uptime
  • 24/7 monitoring
  • High availability

Get the Managed MLOps datasheet ›


AI consulting

Work with our experts to understand your data better and deliver on your use case.


  • Data exploration workshop
  • Canonical MLOps deployment
  • MLOps workshop
  • PoC-based

Get the AI consulting datasheet ›


Support

Looking for Kubeflow support? Work with our team to get support for any cloud environment or CNCF-compliant Kubernetes distribution.

Get the Charmed Kubeflow datasheet ›


Open source AI resources

University of Tasmania (UTAS) modernised its space-tracking data processing with the Firmus Supercloud, built on Canonical's open infrastructure stack.


Learn how to take models to production using open source MLOps platforms.


Learn how to scale AI projects with hardware designed for AI workloads and certified software.


Choosing a suitable machine learning tool is often challenging. Understand the differences between the most popular open source solutions.