OpenAI starts offering a biology-tuned LLM


To address LLMs’ tendency toward sycophancy and overzealousness, OpenAI says it has made the model more skeptical, so it’s more likely to tell you when something is a bad drug target. There was much discussion about the “reasoning” and “expert-level” capabilities of GPT-Rosalind. We were told that the former was defined as the ability to work through complex, multi-step processes, while the latter was derived from the model’s performance on certain benchmarks.

It is unclear whether OpenAI has dealt with the issue of hallucinations, which has plagued various LLMs and may also strike when the system is asked to explain the steps taken by the company to reach its conclusions. Given past experience, it’s likely we’ll see a mix of glowing reports about unexpected connections discovered by AI, as well as examples where it makes clearly wrong suggestions.

However, for now, the company is limiting access due to concerns that the model could produce harmful outputs when asked to do something like optimize the infectiousness of a virus. At this time, only US-based entities can apply for OpenAI’s Trusted Access deployment infrastructure, and the company will limit who can use it. A more limited life sciences research plugin will eventually be made available.

As mentioned above, several other companies have offered science-focused agentic LLMs, but they were much broader in scope than GPT-Rosalind, which is biology-specific. Until we start hearing reports on the effectiveness of this new model, it’s difficult to evaluate whether that narrower focus improves its usefulness.
