Since its introduction to the masses in November of 2022, ChatGPT has reigned as the world leading chatbot powered by artificial intelligence (AI). We love it.
And while Google’s Bard and Microsoft’s Bing AI have been unleashed since, neither these, nor the several other AI chatbots that have emerged in the market since then, have managed to challenge ChatGPT in terms of performance and features.
But now there’s a new(ish) AI kid on the block, known by the name of Claude 2 (pronounced “cloud“), and this chatbot is making quite the buzz!
So, what is Claude AI? Continue reading this post to find out more…
What Is Claude AI?
Like ChatGPT’s parent company OpenAI, Anthropic, the developer of Claude AI, is a Silicon Valley-based AI organization. But Anthropic has a distinct vision and approach when it comes to this rapidly advancing technology, one that revolves around the safety of using such models, as well as their alignment with real human values.
Whenever we see stories in the news about how humanity is doomed (and there are a lot of doomers out there!), because AI will soon take over the world, those journalists obviously haven’t been paying attention to Anthropic.
The organization has a focus on integrating safety measures into every step of the development of its Claude AI model.
This is because Anthropic wants to make AI as friendly as possible for every human user. The goal is for AI to become a helpful virtual assistant, rather than an unpredictable and malevolent robot from the Terminator films or a zombie god that turns all the matter in the universe into paper clips.
So, when it comes to how Claude AI actually operates, safety comes first, unlike other AI chatbots we could mention that seem to be winging it, and hoping for the best.
In fact, Anthropic was co-founded by former executives at OpenAI, including the former Vice President of Research, Dario Amodei.
By putting the emphasis on safety and human values in the development of its Claude AI, Anthropic aims to create AI systems that will be more trusted, will reduce any potential risks and will ensure positive interactions with users.
This cool and unique approach to text generating underlies the foundation of the company’s commitment to creating responsible AI that can be used by everyone, are robust and reliable, and that are aligned with human values.
So, What Can Claude 2 Do?
Anthropic’s flagship Claude AI, which is an advanced large language model like GPT-4, has been designed to be versatile and capable of various tasks, and so is positioning itself as a strong contender in the AI chatbot space.
Because of its advantages when working with metrics like creative writing and coding, it definitely makes far a viable alternative to the top AI chatbots players like Bard, Bing, and yes, even ChatGPT.
The Claude AI model offers a wide range of capabilities that are similar to OpenAI’s ChatGPT and Google’s Bard, such as writing poems, emails, reports, speeches, and resumes, as well as summarizing books, and even coding.
One of Claude’s best apparent strengths is in creative writing, along with its ability to write, debug and code in many programming languages, just like ChatGPT can do.
But what sets Claude apart is its ability to work with text-loaded files, so whether you have a PDF to analyze or Word documents to edit, or even spreadsheets and CSV files to process, this AI chatbot will process your content, as well as provide insightful output.
In fact, Claude 2 can handle up to five files at once!
So, as well as prioritizing safety, Claude AI surpasses Bing AI, ChatGPT, and Bard with its superior creative writing capabilities, and when it comes to coding tasks, Claude AI outshines Bing AI in specific cases, and consistently outperforms Bard AI.
ChatGPT and Claude 2 have similar coding abilities, but it’s only GPT-4 that appears to be more capable of handling more complex tasks, perhaps because it has the ability to search the internet, which Claude AI doesn’t.
Obviously, Claude 2 has the ability to do a lot more than what has been mentioned here, such as playing games and translating legal documents. So if you are in the US or UK, and want to give this new AI a try, visit Claude.ai to sign up for the API, which is cheaper than GPT-4 we might add.
At present, Claude AI is only accessible through the web with limited access to the paid API, with a waitlist.
However, several businesses, including Jasper (another generative AI) and DuckDuckGo, have already started pilot programs using Claude 2.
In addition, Claude 2 is integrated into the backend of Notion AI, an AI writing assistant that works seamlessly within the Notion workspace, and Quora offers access to the upgraded chatbot through its generative Poe AI — and you don’t need to pay or have a subscription, although you will be restricted to only having limited access.
To access Claude 2 through Poe, visit Poe.com and sign up using your Google account (if you’re already signed into Quora through your Google account, this will be seamless).
Once you’ve accessed Poe’s interface, you’ll see the list of chatbots available to you, including Claude 2, Chat-GPT-4, and Google-PaLM (all with limited access), along with a Subscribe button at the bottom, should you wish to pay and have unlimited access.
According to Redditors in several of the Claude 2 threads, early usage suggests that this new Claude AI is far less prone to hallucinations, and is much more resistant to jailbreaking attempts, unlike ChatGPT’s DAN. In addition, Claude AI tends not to lecture users like ChatGPT and Bard AI do.
This means that overall, Claude 2 appears as if it could become a compelling alternative to the dominant top three AI chatbots in the field. It has a welcoming and friendly interface, and has a focus on the safety and comfort of its users.
That said, it can still hallucinate in hilarious ways. For example, when I was working to get Claude 2 to graph some data for me, it suggested we “connect on a call and it will share its screen”. Sadly, when I tried option 3, it refused me.
And although ChatGPT-4 does have a slight edge, because of its powerful code interpreter and plugin library, the latter has been trained on more recent data, though it doesn’t specificy a cutoff date like ChatGPT does.
This data set includes third-party datasets, websites from earlier 2023, as well as user data that was voluntarily supplied, and contained about 10% of non-English content.
Safety First: How Claude 2 Works
Anthropic has employed several measures that it says ensures the safety of anyone using the new Claude AI.
These measures have been designed to minimize any potential risks to users, as well as promote responsibility.
Four of the key aspects of this approach include:
1. Safety Development: Anthropic has integrated safety considerations into Claude 2’s development process, and actively monitors and evaluates its performance, behavior, and outputs in order to identify and address any issues that arise promptly.
2. Red Teaming Evaluation: The AI developer carries out extensive red teaming evaluations, which involve subjecting Claude AI to a diverse set of adversarial prompts and tests.
The red teaming evaluation process uses offensive procedures and techniques to help Anthropic uncover malicious threats, vulnerabilities, and potential harmful responses, as well as find areas where improvements are required.
3. Automated Tests: In order to ensure safety and reliability, Claude 2’s parent uses automated tests that are designed to identify and address harmful or unintended behaviors in the model’s responses.
However, Anthropic doesn’t provide any detailed information about the specific prompts, tests, and checks that it is using for benchmarking these tests, so trust could be compromised due to this lack of transparency.
4. User Feedback: Anthropic values user feedback and continuously iterates on the model to improve its safety and performance. By monitoring real-world usage and actively seeking user input, they can identify areas where Claude AI may need adjustments to ensure safer and more reliable interactions.
Collectively, Anthropic intends that these measures will offer a more trustworthy experience when using Claude’s generative AI to ensure the well-being of its users.
Is Claude 2 Woke or PC?
According to this Anthropic white paper, it is claimed that Claude 2 is much less likely to generate biased content than its predecessor, Claude 1.3.
However, it is also acknowledged that this is because Claude 2 will not answer questions that are considered controversial or problematic, or could be deemed discriminatory in any way.
This is most likely why Anthropic has advised against using Claude 2 to create AI applications that focus on physical or mental health and wellbeing, because receiving an incorrect answer in this situation could end up causing someone real bodily harm.
And then there’s its “constitutional” AI concept.
Yes, Anthropic has deemed an AI constitution, which lists a set of guiding values that have influenced the development of Claude’s decision-making process and behavior, such as non-toxicity and helpfulness.
The company believes that this so-called “digital code of conduct” will work to help to make Claude’s AI model both safer to use and more helpful for users, because it defines clearer boundaries and guidelines for the chatbot’s responses.
It should be noted, though, that Anthropic has acknowledged that constitutional AI isn’t a definitive solution to training Claude, and it admits to a trial-and-error process in developing the guiding principles, but says it has made many adjustments to prevent the AI from being too judgmental, preachy or annoying.
However, it’s important to remember that like all generative AI at present, Claude 2 will have limitations, because no AI model is perfect. And although it may seem more unlikely, even Claude 2 could suffer from challenges like hallucinating or generating biased text.
Claude 2 appears to offer an exciting step forward in the development of AI chatbots, in terms of its user-friendliness, power, and versatility. The competition that it brings to existing models like Google’s Bard and OpenAI’s ChatGPT suggests that it is definitely worth taking time to explore.
It’s also important to note that the user experience of any AI model will vary, based on its specific design and training data, and your ability to be able to fine-tune it. And only time will tell whether or not Claude 2 will be able to dethrone ChatGPT from the generative text AI perch.
However, we encourage you to explore Claude AI’s capabilities firsthand, so that you can see for yourself how this innovative and friendly AI chatbot works in real-world applications.