Philipp Moritz, Hao Chen, Tyler Griggs, and the SkyRL Team

🗓️ Posted: February 8, 2026

<aside>

We are happy to announce SkyRL tx v0.3.0!

SkyRL tx is a LoRA-native training and inference engine that implements the Tinker API, allowing anyone to run a Tinker-like service on their own hardware.

In this release, we add expert parallel support, DeepSeekV3 model support (e.g. the GLM 4.7 Flash model), and a number of optimizations for long sequence lengths, along with smaller features, performance improvements, and a few bug fixes.

</aside>

<aside> 📢

We gave a talk, “SkyRL tx: A unified training and inference engine,” at this year’s Ray Summit; check out the recording and slides.

</aside>

Updates

Bugfixes

There are a number of exciting in-flight PRs, including support for the full GLM 4.7 model #989, support for mHC #1008, support for running on a Ray cluster #955, and support for the Olmo 3 model #1043. Thanks to Tanmay, Han-Ju Chen, and Jiang for their contributions!

As always, we welcome more contributions!