Last week, I attended a conference in Amsterdam. The conference? World of AI Summit (as in Artificial Intelligence). As with any conference, there were good sessions, especially the opening ones in the morning, but I was surprised at how many speakers never addressed a couple of points, including this one: what happens when your third-party technologist is speaking to a technologist at a company who doesn’t know or fully understand all the implications of selecting the right LLM, let alone the pros and cons? I asked this question, and two panel speakers agreed it was a good one and worth considering. I didn’t ask it to score brownie points; I asked because I have seen it happen, and I know it will happen time and time again.
After that one session, a few folks approached me to discuss that question, and one sought advice on what LLMs they should consider if not OpenAI. All agreed it was an excellent question to ask.
Later, I attended a few Gen-AI sessions – I was going to the Gen-AI track, which included ChatGPT.
I came away with a few points. I should note that in the early panel session, where I posed the question, one panelist was from a university and an expert, and the rest were AI experts across the board. In one ChatGPT session, again, there was a university person, two AI experts, and a professor from Oxford with a strong background in AI and economics. I was very impressed by him and his retorts.
I raised my hand at the end with a question. It was about token fees, because nobody on the panel had addressed them, and they have real consequences. Any company that implements an LLM must recognize that token fees will not be cheap – usage is the key driver here. There will be a financial impact, so what do you say to a company, or how do you make them cognizant of this fact? The economics gentleman agreed about the token fees and their cost impact. He stated that a company must be willing to take a risk, knowing the financial implications of using Gen-AI (assuming it is not 100% free – and even then, there is always a cost), and that risk-averse companies should hold off. Yet, if you look at the number of companies just diving in without considering token fees, you will see that people are not thinking it through or getting the whole picture.
The biggest takeaway from the event was the heavy focus on ChatGPT, which is fine, as it is the leader. Yet many other LLMs are legitimate competitors to ChatGPT, and other LLM companies should be considered besides OpenAI. When you hear about the latest cool ChatGPT features, they aren’t coming to the freebie version; they are coming to the fee-based ones, ChatGPT Plus or ChatGPT Enterprise (the company version). OpenAI added Bing browsing (to ChatGPT Plus, coming soon) because of Bard and Google. Amazon invested four billion dollars in Anthropic, the makers of Claude (the latest version, Claude 2, is now referred to simply as Claude). Anthropic is going to launch a business version of Claude at some point.
They wouldn’t invest that kind of money just because they like dropping it. Plus, Amazon has Titan (now available for commercial use) and Bedrock – which is amazing, and yet so few folks are aware of it. Once you see it, you will go “holy moly,” because it has the potential to solve the interoperability issue that will inevitably arrive some day. AI21 Studio, which has received a lot of capital, is a pretty cool LLM. Cohere is another LLM and a legitimate competitor. And the list goes on.
I heard from numerous folks that they were unaware of other LLMs. And they weren’t all newbies, by the way. Quite the opposite.
The Learning Library, BTW, has a few sections around Gen-AI, including comparisons between Claude and ChatGPT, and between ChatGPT Plus and Claude Pro.
Is ChatGPT the answer?
One learning system vendor told me they hired a consultant to recommend which LLM they should use. This so-called expert’s selection? ChatGPT. The logic behind it made no sense, by the way.
Docebo switched from OpenAI to Google Cloud (although I never saw ChatGPT or any other OpenAI LLM in the Docebo system per se; the company they recently acquired was using an OpenAI LLM). I’m still working out which LLM they are using now. They are the anomaly. Every other vendor I have spoken with who has added an LLM (the foundation you need for Gen-AI) has gone with OpenAI or Azure AI (which is OpenAI). It’s odd. Each of them is using ChatGPT, with some going the GPT model route, which enables them to use GPT-3.5 Turbo (ChatGPT with lower token fees) or move up to GPT-4 (the model behind ChatGPT Plus), which incurs higher token fees and a lot more financial impact. Some vendors just went with the free ChatGPT to avoid paying fees, which is about as valuable as the peanuts in Cracker Jack. I swear the ratio was 50 peanuts to one corn thing. Nobody wants those peanuts.
I should note here that authoring tools, including AI-only ones, are going heavy with ChatGPT. Again, I have yet to see a vendor select any other LLM out there.
Lucy.AI, a third-party AI solution that SAP will implement and a variety of entities use, runs on – wait for it – ChatGPT, the Plus version.
Arist, the darling of quite a few companies? ChatGPT Plus.
I can’t say for certain why all these vendors went with GPT-3.5 Turbo (lower token fees), ChatGPT Plus, or the GPT model (which includes both) and whatever comes next – and trust me, those token fees will be higher – but I can surmise why some did. They weren’t aware of the others, or they looked at a couple but didn’t know what else was out there. Sort of like folks looking for a learning system – regardless of whether it is an LMS, LXP, learning platform, etc. – they are aware of only a few and zero in there, rather than going full throttle and looking everywhere (I think way too many believe Trust Radius and similar ilk are the most trustworthy – they are not).
What are they using ChatGPT, Azure AI (again, it’s OpenAI), or another Gen-AI (LLM) offering for?
Overwhelmingly, it is for content – as in their content creator tool (formerly known as an authoring tool). The premise? Create content quicker. Thought Industries is one such vendor. The tool generates the content, and then you can rewrite it.
What I do like about it is that you are given multiple outputs, and you pick the one you want. The downside, though (and so far, I haven’t seen a vendor with a built-in content tool achieve this), is that you can’t give an output a thumbs up or down and identify what is wrong with it – if it is wrong. You can change the text within the output, but unless you can do the former, the AI will never know and will assume its response was correct.
A few use it to create a course pathway, plan, or modules, where it will give you the header(s) for each module/path, and then you add your assets. Or you might get lucky, and it will produce a description, for example.
Next up? Assessment tools. LearnUpon does it (just released), Knowledge Anywhere, and others.
Then some go further; they use it almost as an answer engine. This is a good one, because you would go (assuming you are using ChatGPT) with either the full freebie or, for many vendors, GPT-3.5 Turbo. Why the lower-fee or zero-fee options? Each prompt, which is made up of characters, adds up. Think 2,500 people asking questions (using prompts). Every character – which, behind the scenes, includes quotation marks and spaces – starts to add up.
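To make that “it adds up” point concrete, here is a minimal back-of-envelope cost sketch. It uses the common rule of thumb of roughly four characters per token for English text (for exact counts you would use the provider’s tokenizer), and the per-1,000-token prices in the example are illustrative placeholders, not current rates.

```python
# Rough token-cost estimator for a chat/answer-engine feature.
# Assumption: ~4 characters per token (rule of thumb for English text).
# The prices used below are ILLUSTRATIVE placeholders, not real rates.

def estimate_tokens(text):
    """Approximate token count; spaces and quotation marks count too."""
    return max(1, round(len(text) / 4))

def monthly_cost(users, prompts_per_user, avg_prompt_chars, avg_response_chars,
                 price_per_1k_input, price_per_1k_output):
    """Estimate monthly API spend: (input tokens + output tokens) x price."""
    in_tokens = users * prompts_per_user * estimate_tokens("x" * avg_prompt_chars)
    out_tokens = users * prompts_per_user * estimate_tokens("x" * avg_response_chars)
    return (in_tokens / 1000) * price_per_1k_input + (out_tokens / 1000) * price_per_1k_output

# 2,500 learners, 20 questions each per month, modest prompt/response sizes,
# placeholder prices of $0.0015/1K input and $0.002/1K output tokens:
cost = monthly_cost(2500, 20, 400, 1200, 0.0015, 0.002)
print(f"${cost:,.2f} per month")  # prints "$37.50 per month"
```

The exact dollar figure matters less than the shape of the math: costs scale linearly with users, prompts, and response length, so a “cheap” per-token price can still become a real line item at scale.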
I liked what Learning Pool did: using your own content (a PDF, etc.), an end-user can type in the start of a word or words, and the system goes right to where that statement or paragraph is, without showing the entire document (something Lucy.ai, for example, doesn’t do – it shows it all). Granted, a vendor with machine learning could pull this off, but the bonus and win for me was that it provided the page number. Thus, if you wanted to go into the material at some point, you could find it with the page number.
This is low-hanging fruit for Gen-AI usage, in my opinion, so why aren’t other vendors doing it?
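The core of that Learning Pool-style lookup is simple enough to sketch. This is not their implementation – just a toy version, assuming your content has already been split into per-page text: find the first page containing the query, and return the page number plus a snippet rather than the whole document.

```python
def find_in_pages(pages, query):
    """Return (page_number, snippet) for the first page containing query.

    `pages` is the document's text split per page (result is 1-indexed).
    Only a short snippet around the match is returned, not the full page.
    """
    q = query.lower()
    for page_num, text in enumerate(pages, start=1):
        idx = text.lower().find(q)
        if idx != -1:
            # Show a little context around the hit, not the entire document.
            start = max(0, idx - 40)
            end = min(len(text), idx + len(query) + 40)
            return page_num, text[start:end].strip()
    return None  # no page contains the query

pages = [
    "Intro to compliance training and course objectives.",
    "Module 2 covers data privacy and PII handling rules.",
    "Appendix and glossary.",
]
print(find_in_pages(pages, "data privacy"))
```

A real system would use a search index or embeddings instead of a linear scan, but the user-facing win the blog describes – snippet plus page number – is exactly what this returns.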
As with any Gen-AI LLM (or any LLM, for that matter), even with your own content, you will get hallucinations. That’s reality. And any vendor who tells you that with your own content you will never get them, or that it is always accurate, is lying. I say this bluntly because if they are selling you a system with AI – and that AI is Gen-AI (the key here) – they should know this fact. There are no excuses here, which is why I never understand vendors passing the buck to you – expecting you to know it, even though they push it and hype it heavily in their marketing.
Could their LLM do more than what you stated above?
100% yes – but it depends on which LLM they use for Gen-AI. If they are using Claude, for example, you can drop in a PDF or just about any file type, such as an Excel file, and it will automatically output a summary. Even if your request is ambiguous, it can still pull a summary – and the cleaner and better-structured the file, the more accurate the result.
For those using ChatGPT Plus, they could rock it, because OpenAI is adding new capabilities all the time. The image capability is amazing. Just look at some of the examples out there, think about how that could be used for your learners – and then ask, “Why are those systems using ChatGPT Plus not doing this?”
On a side note, your L&D, training, and HR departments do not have to wait for your learning system to do this kind of cool stuff – if you choose to try out ChatGPT Plus. The fee is $20 per month. I use it and love it. I also use Claude Pro ($20 a month) and a couple of others, because no single one is perfect. Each has strengths and weaknesses. And BTW, you can try out AI21 for free, which I also play around in. It’s a very cool offering.
If I am a vendor – a learning system or other learning technology – what should I be aware of before I add Gen-AI?
- What do you want it to do? Why do you want to add Gen-AI? Is it because your clients are asking for it? Because you recognized you need it? Another reason?
- Did you look at the various LLMs out there besides OpenAI and Azure? There are a lot of LLMs out there, and one that intrigues me (not yet live, but Google is having a few companies test it) is Gemini. That could realistically damage OpenAI – a potential game changer.
- Do you need it now, or can you wait until 2024? If you say 2025, you are going to be too far behind.
- If you add it, what is some of the low-hanging fruit you could tackle immediately? And what are some things you could add next year that would be a USP (unique selling proposition) for you? Do you want to be like everyone else, or do you want to truly be a leader?
If your CTO, IT team, or development team pushes back because they do not want to do this, let’s remember who is in charge. The last time I looked, it was the CEO. If you are thinking “none of my clients are asking for it,” this isn’t the time to think that way. Nobody goes “yuck” when they see it. Think of it this way: when you roll out an updated UI, do your clients tell you ahead of time that they want it? The whole “we let our clients drive” approach (which you should never use anyway) is non-applicable here.
- Do you have the budget to do this? Can you handle the costs? One way around this is to pair machine learning (a form of AI) with Gen-AI, or to go with machine learning only (which many vendors have, or still have, actually).
- Are you aware of the number of tokens per prompt? There are always caps.
- I’d ignore the number of parameters. It sounds impressive, but how many are you genuinely going to use? I highly doubt 1 trillion (and yes, at least one LLM claims that many).
- How will privacy and security issues be addressed? Just because your system handles them, does your Gen-AI LLM too? I mean, if you have to strip out PII before the Gen-AI sees it, how will that work?
One option is to look at RAG (Retrieval-Augmented Generation), which also increases accuracy – but again, nothing is 100% with Gen-AI (LLMs). If you use Azure or OpenAI, for example, you can get a plugin for it.
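For readers new to RAG, the mechanics are simple: retrieve the most relevant chunks of your own content, then hand them to the LLM as context and instruct it to answer only from that context. Here is a toy sketch of that loop – the keyword-overlap "retrieval" is a stand-in for the embeddings and vector store a real system would use, and the assembled prompt would then be sent to whatever LLM API you have chosen.

```python
# Toy RAG loop: retrieve relevant chunks of YOUR content, then ground
# the LLM's answer in them. Retrieval here is naive keyword overlap;
# production systems use embeddings and a vector store instead.

def score(chunk, question):
    """Count how many question words appear in a chunk (toy relevance)."""
    return sum(1 for w in set(question.lower().split()) if w in chunk.lower())

def build_prompt(chunks, question, top_k=2):
    """Pick the top_k most relevant chunks and wrap them in instructions."""
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n".join(f"- {c}" for c in best)
    return (
        "Answer ONLY from the context below. If the answer is not there, "
        "say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

chunks = [
    "Refunds are issued within 30 days of purchase.",
    "Courses unlock after manager approval.",
    "The LMS supports SCORM and xAPI packages.",
]
prompt = build_prompt(chunks, "How long do refunds take?")
# `prompt` now leads with the refund policy chunk; send it to your LLM.
```

The "answer only from the context" instruction is why RAG improves accuracy on your own material – though, as noted above, it still does not eliminate hallucinations entirely.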
In just a short period of time, numerous learning system and learning technology vendors have added Generative AI. Yet, overall, what they offer with it is limited. Are text-based responses or faster content output the best they can do? If they are paying for ChatGPT Plus, for example, the answer is no.
Maybe it is in their roadmap to do more.
Maybe it isn’t.
Maybe they are unaware of what is out there, and rather than waiting, testing, and validating with a rollout in 2024, they felt the need to push quickly and figure it out later.
All options are on the table.
For those who have asked whether I provide services around Gen-AI – for L&D/Training or other departments (for your employees, customers, members, etc., who are already using it and need to know what to do), for learning systems and learning tech vendors, and for companies in general – the answer is yes. I offer workshops and speaking engagements (on-site or via webinar), anywhere in the world, plus a bundle of other options. Contact me to learn more.