July 10, 2023
Authored by Richard Robinson, CEO at Robin AI
During a typical late-night social media scroll at the weekend, I read a post about the reproducibility crisis in LLMs.
Reproducibility, the ability to independently verify a model's claimed results, is at the heart of an ongoing debate in the AI space: companies make lofty claims about the performance of their LLMs to attract new customers and market attention, but when it comes to showcasing their technology, they fail to deliver. Either the LLMs are unable to produce the results that were promised, or, in some cases, the product doesn't actually exist yet.
One of the main reasons I started Robin AI was my belief that legal advice is too expensive, and that with the right technology it can be considerably cheaper, faster and simpler. Addressing these problems is critical if we want to empower people to protect themselves and to restore trust in the legal industry.
Offering large parts of our product to users for free is a core part of fulfilling that mission, though it's not without its challenges. A free platform is expensive to run, extremely time-consuming to maintain, and means you're never more than a bug away from public embarrassment when something unexpected happens. But it's always been clear to me that to achieve our mission of improving access, we need the industry to embrace technology.
To do that, lawyers have to cut through the noise and make their own evaluations. But technology providers and vendors need to help them too.
Legal teams are responsible for some of the riskiest work in business. They’re generally overworked, understaffed and asked to make the impossible possible. So, understandably, they have incredibly high expectations. But when every company claims to be a specialist in ‘generative AI’, it’s really difficult to separate fact from fiction. Every company seems to have a waitlist or a new feature update, or (even worse) an extortionate price tag attached to their bolt-on AI products (despite bold promises about the exponential power of their technology).
I think that when the stakes are this high, and people are trying to understand how these products can help with their most important work, the best approach is to let people decide for themselves. That’s why from our inception we’ve always had a show, don’t tell ethos at Robin AI.
As legal teams try to become more innovative, they need to find digital tools they can trust and avoid the ones that will waste their time. But you can't do that without a clear understanding of a tool's capabilities.
Rather than being told what might work for them, lawyers should be the ones evaluating prospective technology: as the subject matter experts, they're best positioned to judge.
To make sure our technology is as transparent as possible, our approach at Robin AI is to simply allow anyone to test it for themselves - for free. We fundamentally believe that law should move at the speed of business and be as accessible as possible. So, whether you’re a lawyer or not, we still want you to test our platform out for yourself.
That’s why our modular platform is already out there with many everyday users across four continents and is always available to anyone who’s interested in seeing what it can do for them.
So I'm glad that the issue of reproducibility in LLMs, and the need for transparency in AI technology, is being discussed. I hope other people will be brave enough to show, not tell - we need to make these tools, and legal services, more accessible. You can sign up for a free Robin AI account and test out our platform yourself here.