Researcher turns gpt-oss-20b into a non-reasoning base model

Carl Franzen 2025-08-15 19:19:00



OpenAI’s new, powerful open weights AI large language model (LLM) family gpt-oss was released less than two weeks ago under a permissive Apache 2.0 license, the company’s first open weights model launch since GPT-2 in 2019, but developers outside the company are already reshaping it.

One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta, who this week unveiled gpt-oss-20b-base, his own reworked version of OpenAI’s smaller gpt-oss-20B model. His version removes the “reasoning” behavior of the model and returns it to a pre-trained “base” version that offers faster, freer, more uncensored and unconstrained responses.

The model is available now on Hugging Face under a permissive MIT License, allowing it to be used for both additional research and commercial applications.

How gpt-oss-20B-base is different from OpenAI’s gpt-oss models

To understand what Morris did, it helps to know the difference between OpenAI’s release and what AI researchers call a “base model.”




Most LLMs offered by leading AI labs such as OpenAI, Anthropic, Google and even open source players like Meta, DeepSeek, and Alibaba’s Qwen team are “post-trained.”

This means they have gone through an additional phase in which they are exposed to curated examples of desired behavior.

For instruction-tuned models, that means giving the model many examples of instructions paired with ideal responses, so it learns to respond more helpfully, politely, or safely to natural language requests.

The gpt-oss models OpenAI put out on August 5 were “reasoning-optimized”: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured “chain of thought” reasoning before producing a final answer.

This is a trend that goes back to OpenAI’s o1 model released almost a year ago in September 2024, and one that numerous leading AI labs have since adopted, forcing the models to think longer over multiple steps and check their own work before outputting a well-reasoned response to the user.

That makes them better suited for tasks like coding, solving math problems, or answering factual questions with explanations, but it also means their responses are filtered and steered away from unsafe or undesirable content.

A base model is different. It’s the raw, pretrained version of a large language model before that reasoning-specific alignment is applied. Base models simply try to predict the next chunk of text given what’s come before, with no built-in guardrails, stylistic preferences, or refusal behaviors.

They’re prized by some researchers because they can produce more varied and less constrained output, and because studying their unaligned behavior can reveal how models store knowledge and patterns from their training data.

Morris’s goal was to “reverse” OpenAI’s alignment process and restore the smaller gpt-oss-20B to something much closer to its original pretrained state.

“We basically reversed the alignment part of LLM training, so we have something that produces natural-looking text again,” he wrote in an X thread announcing the project. “It doesn’t engage in CoT anymore. It is indeed back to a model that just predicts the next token on generic text.”

Rather than trying to jailbreak the model with clever prompts, which Morris said proved ineffective during his early experiments, he took a different tack after a conversation with OpenAI co-founder, former Anthropic researcher and current Thinking Machines chief scientist John Schulman.

The key was to think of alignment reversal as a small optimization problem: if most of the model’s pretrained knowledge is still present in its weights, then only a tiny, low-rank update might be needed to nudge it back toward base model behavior.

Morris implemented that idea by applying a LoRA (low-rank adapter) update to just three layers of the model (the MLP layers at positions 7, 15, and 23) with a rank of 16.

That meant training about 60 million parameters, or 0.3% of the model’s 21 billion total. He used around 20,000 documents from the FineWeb dataset, keeping the format as close as possible to original pretraining (“ ….” style) so the model wouldn’t learn anything new, just re-enable broad free-text generation.
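As a rough illustration of why LoRA keeps the trainable footprint so small, the sketch below counts adapter parameters for a rank-16 update. The dimensions and expert counts here are assumptions for illustration, not gpt-oss-20b’s published shapes, and the reported ~60 million figure depends on exactly which projections inside each mixture-of-experts layer the adapter touches.

```python
def lora_params(rank: int, d_in: int, d_out: int) -> int:
    """Trainable parameters for one LoRA-adapted matrix:
    A is (rank x d_in), B is (d_out x rank)."""
    return rank * (d_in + d_out)

# Hypothetical dimensions for illustration only (not gpt-oss-20b's
# actual shapes): model width, expert feed-forward width, experts
# per MoE layer, and the three adapted layers.
rank, d_model, d_ff = 16, 2880, 2880
n_experts, n_layers = 32, 3

# One up- and one down-projection per expert, adapted in every
# expert of each of the three chosen MLP layers.
per_expert = lora_params(rank, d_model, d_ff) + lora_params(rank, d_ff, d_model)
total = per_expert * n_experts * n_layers
print(f"{total:,} trainable parameters")  # tens of millions, vs. 21B total
```

The point of the arithmetic: even adapting every expert in several MoE layers leaves the trainable set a fraction of a percent of the full model.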

Training took four days on eight NVIDIA H200 GPUs, Morris told VentureBeat via direct message on X, with a learning rate of 2e-6, a batch size of 16, and a maximum sequence length of 8,192 tokens.

Afterward, he merged the LoRA weights back into the model so users could run it as a standalone, fully finetuned artifact.
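In the Hugging Face ecosystem, folding a trained adapter back into its base checkpoint is typically done with PEFT’s `merge_and_unload`. The sketch below shows that general pattern, not Morris’s actual script; the model IDs and paths are placeholders, and the imports are deferred inside the function so the sketch stays runnable even without `transformers` or `peft` installed.

```python
def merge_lora_adapter(base_model_id: str, adapter_dir: str, output_dir: str):
    """Fold trained LoRA weights into the base model so the result
    loads as an ordinary standalone checkpoint (no adapter needed)."""
    # Deferred imports: requires the `transformers` and `peft` packages.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_model_id)
    model = PeftModel.from_pretrained(base, adapter_dir)
    merged = model.merge_and_unload()  # adds scaled B @ A into each adapted weight
    merged.save_pretrained(output_dir)
    return merged

# Usage (placeholder paths):
# merge_lora_adapter("openai/gpt-oss-20b", "./lora-checkpoint", "./merged-base")
```

Merging is what makes the released artifact a single ordinary checkpoint rather than a base model plus an adapter file.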

Morris also had to contend with the limitations of current open tools for fine-tuning mixture-of-experts (MoE) architectures like gpt-oss.

Morris said he used Hugging Face’s framework, which he found crashed frequently and only supported certain training modes, so he wrote his own harness to checkpoint often and skip over data batches that risked overloading GPU memory.

Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified he is not claiming to have recovered the base model “weights,” that is, the internal settings of the artificial neurons that make up the model’s neural network and govern its behavior.

Rather, Morris says his work has “recovered the base model’s *distribution* with some error”: the probability patterns the model uses to generate outputs, even though the weights producing those patterns may differ.
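The weights-versus-distribution distinction is easy to see in miniature: softmax, the function that turns a model’s raw scores into next-token probabilities, is shift-invariant, so different parameter settings can induce exactly the same output distribution. The toy example below (plain Python, no ML libraries) quantifies the gap with KL divergence, a standard measure of the “some error” between two distributions.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far distribution q is from p, in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Two different "weight settings" (logit vectors) that induce the
# same next-token distribution: softmax ignores a constant shift.
p = softmax([2.0, 1.0, 0.5])
q = softmax([4.0, 3.0, 2.5])  # same logits shifted by +2
print(kl_divergence(p, q))    # ~0.0: same distribution, different "weights"

# A slightly perturbed setting recovers the distribution "with some error."
r = softmax([2.05, 0.98, 0.5])
print(kl_divergence(p, r))    # small but nonzero
```

This is why two models with different weights can still be behaviorally near-identical: what matters for generation is the distribution the weights induce.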

How the new gpt-oss-20b-base model’s behavior differs from gpt-oss-20b

The resulting gpt-oss-20b-base is noticeably freer in its outputs. It no longer defaults to explaining reasoning step-by-step and will produce a wider range of responses, including instructions OpenAI’s aligned model would refuse to give, like building a weapon, listing profanity, or planning illegal activities.

In short tests, Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried, showing that some memorized material is still accessible.

Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format (“Human: … Assistant: …”), it will sometimes still act like a polite chatbot. And when run through the original gpt-oss chat template, it can still carry out reasoning tasks, albeit with some loss in quality.

For best results in free-text mode, he advises prepending prompts with the model’s special beginning-of-sequence token and avoiding chat templates entirely.
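A minimal sketch of that advice, assuming a Hugging Face-style tokenizer: read the beginning-of-sequence token from the tokenizer rather than hard-coding it, and skip chat templating entirely. The `<BOS>` string below is a placeholder, not gpt-oss’s actual token.

```python
def base_prompt(text: str, bos_token: str = "<BOS>") -> str:
    """Format text for free-text (base-model) generation: just the
    BOS token plus raw text, with no chat template around it.
    In practice, pass the real token, e.g. tokenizer.bos_token."""
    return bos_token + text

print(base_prompt("The history of the printing press"))
# With a real tokenizer (hypothetical usage):
#   tok = AutoTokenizer.from_pretrained(model_id)
#   prompt = base_prompt("The history of ...", bos_token=tok.bos_token)
```

The contrast is with `tokenizer.apply_chat_template`, which wraps text in role markers the base model no longer expects.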

Building upon OpenAI’s big gpt-oss family release

The gpt-oss family debuted to considerable attention. The two models, gpt-oss-120B and gpt-oss-20B, are text-only, multilingual, and built with a mixture-of-experts Transformer architecture. They were released under the permissive Apache 2.0 license, allowing unrestricted local use, fine-tuning, and commercial deployment.

Performance benchmarks from OpenAI showed the larger 120B model matching or exceeding the proprietary o4-mini in reasoning and tool-use tasks, with the smaller 20B competitive with o3-mini.

This was OpenAI’s first open-weight release in six years, a move widely interpreted as a response to competitive pressure from other open-weights providers, including China’s DeepSeek R1 and Qwen 3.

The company positioned gpt-oss as both a way to re-engage developers who had moved to rival open-source models and as a platform for safety research into open-weight systems.

Reaction to the initial gpt-oss was mixed

Developer reaction to OpenAI’s gpt-oss models has been decidedly mixed, with responses ranging from eager to disappointed.

Supporters praised the permissive license, efficiency, and strong showing on STEM benchmarks.

Hugging Face CEO Clem Delangue described the release as a “meaningful addition to the open ecosystem” and urged the community to give it time to mature.

Critics argued that the models appear heavily trained on synthetic data, making them excellent at math and coding but less capable at creative writing, general world knowledge, and multilingual reasoning.

Some early testers also raised concerns about lingering safety filters and possible geopolitical bias.

Against that backdrop, Morris’s gpt-oss-20b-base stands out as a concrete example of how open-weight models can be adapted and repurposed in the wild within days of release.

Indeed, in contrast to the way OpenAI’s gpt-oss was received, most of the responses to Morris’s work I’ve seen have been warm and enthusiastic. As one computer scientist wrote on X: “this is the coolest thing I’ve seen on Twitter [X] in the past few months.”

The approach strips away much of the behavior OpenAI built in and returns the model to something closer to a raw, pretrained system, a shift that’s valuable to researchers studying memorization, bias, or the impact of alignment, but one that also comes with higher safety risks.

Moreover, Morris says his work on restoring reasoning models to pre-trained, non-reasoning base models will continue, including comparing extraction on non-reasoning, instruction-tuned models like those offered by Qwen.
