
What would make an AI trustworthy?

Hello again 👋

In my previous article I posed the question:

"Would you read an article written by AI?"

That article included a simple yes-or-no poll, and perhaps unsurprisingly the overwhelming majority of voters said no (9 of 10 votes at the time of writing).

In the comments, a clear theme emerged: most people aren't remotely interested in reading "creative" writing generated by a large language model.

@captainarm123 summarizes it nicely:

Additionally, AI should be used to complement human efforts rather than replace them entirely. The human touch, creativity, and understanding of emotions are still crucial in many aspects of writing.

I take this as an indication that while AI has become remarkably good at generating coherent, often relevant and applicable bodies of text... There's still something missing.

For now at least, this quells my fears surrounding the notion of AGI.

However it also got me thinking.

Perhaps AI will never be able to replicate the elusive force that is consciousness, perhaps it will.

But the fact remains that we are in a new era of computing that until recently was reserved for the pages of science fiction.

So the question I asked myself was this:

How can AI be used right here, right now, in a genuinely useful way?

Don't get me wrong, there's an entire industry emerging before our eyes presenting answers to this question.

But I think they all (currently) share the same fundamental flaw: a question to which I, at least, have yet to see a well-articulated answer:

What makes an AI trustworthy?

One example I have high hopes for is Perplexity's approach of citing the sources of information it's drawn upon.

Another is Gemini's approach of displaying a button which you can use to simply Google your prompt.

Are either of these as rigorous as the peer review process used in academic literature?

No.

That said, no system is without its faults, and that includes peer review.

What I think would be neat is some sort of compromise between these two approaches. A middle ground of sorts, where AI responses could be cached in some way or another and fact-checked by appropriate experts.
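To make the idea concrete, here's a minimal sketch of what such a cache-and-review layer might look like. Everything here (the class names, the review statuses, the hashing scheme) is my own illustration of the concept, not an actual design from Ezlo.ai or anywhere else:

```python
from dataclasses import dataclass
from enum import Enum
import hashlib
from typing import Optional

class ReviewStatus(Enum):
    PENDING = "pending"      # AI answer cached, awaiting expert review
    VERIFIED = "verified"    # an expert has signed off on it
    DISPUTED = "disputed"    # an expert has flagged it as wrong

@dataclass
class CachedAnswer:
    question: str
    answer: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: Optional[str] = None

class AnswerCache:
    def __init__(self) -> None:
        self._store: dict[str, CachedAnswer] = {}

    @staticmethod
    def _key(question: str) -> str:
        # Normalize whitespace and case so trivially different phrasings
        # of the same question map to the same cache entry.
        return hashlib.sha256(question.strip().lower().encode()).hexdigest()

    def put(self, question: str, answer: str) -> None:
        # A fresh AI response always enters the cache as PENDING.
        self._store[self._key(question)] = CachedAnswer(question, answer)

    def review(self, question: str, reviewer: str, approved: bool) -> None:
        # An expert either verifies or disputes the cached answer.
        entry = self._store[self._key(question)]
        entry.status = ReviewStatus.VERIFIED if approved else ReviewStatus.DISPUTED
        entry.reviewer = reviewer

    def get(self, question: str) -> Optional[CachedAnswer]:
        return self._store.get(self._key(question))
```

The point of the `status` field is that a reader could then see at a glance whether an answer has merely been generated, or has actually been checked by a named expert, which is where the trust comes from.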

I've decided to pivot my current indie hack, Ezlo.ai, in the direction of a Q&A site, and I intend to implement a system like this.

If you're curious to see where it goes I encourage you to navigate to the site and subscribe to my mailing list (the 🎁 button in the top right).

Make no mistake, AI is here to stay. I think its (alarmingly high-stakes) success or failure is ultimately going to come down to how we, the people, use it. In its current form I view it as a tool; one day perhaps it will be more...

But that's for another post.

Cheers 🍻

on June 11, 2024