5 Questions You Must Ask Every Vendor With Gen-AI

Vendors are a clever bunch, whether they are selling learning technology, content/courses, or learning systems. I saw this firsthand at LTUK. Those who have Gen-AI in their system, platform, content/courses, or technology promote it very visibly. Some who didn't even have it pitched it at vendor sessions on the floor. Lots of attendees listened intently, eyes fully focused on the information at hand, as though Svengali was in front of them.

While observing one session, it was clear the person had just scraped the internet for their presentation. Still, eyes were on them. After all, generative AI, aka Gen-AI, is all the rage, and darn it, if anyone is going to add it, it's us in the L&D and Training communities. First, though, we must ask the right questions. For many, though, it isn't so much a question as a "wow," eyes glazed over, listening intently and watching Gen-AI in action.

I get it. It's new. If done right, it can be spun into a magic elixir that, who knows, could be the alchemist's dream of turning anything into gold. And therein lies the problem. Although Newton actually thought he could turn X into gold. Good thing he had that gravity validation as a backup.

Filtered was one of those vendors promoting Gen-AI in their system. This, of course, made me want to know more. I zoomed over there as fast as my feet could go and asked to learn more about Gen-AI in their system. I believe it is ChatGPT. One of the new components was "curation plus"; the other – okay, I can't recall, but it had a "plus" too.

It sounded great. I can use Gen-AI in this system to generate a learning pathway and curate content. Oh, boy! Then I saw it and started asking questions.

  • How do you account for hallucinations? Do you tell people they might get hallucinations in their outcomes? (The answer, BTW, was no.)
  • It looks like a search engine function (this was with one of the "plus" things) – why not just have a search bar? (They noted that vendors had come by who were impressed by said search function and wanted to add it to their systems.)
  • This is a chatbot, and the assets are the ones I pay for, correct? (Answer: yes)
  • With the AI pathway, while it generates each header, the assets underneath are the ones I pay for or must own, correct? (Answer: yes)
  • Getting back to the hallucinations: what can I do on the back end to make sure the content people see is accurate? This was under the assumption that the content it pulls down from the net (which is free content) is, well, from the net. (Answer: "We have identified several sources, vetted by us, that you can access. If the client wants to add additional sources, we can add them.") In other words, Filtered's Gen-AI doesn't scrape the entire internet – rather, it goes only to selected sites that Filtered has "vetted," whatever that entailed. You still access free content, just selected "free content."

Now you may ask yourself why a vendor would limit such options, or tweak it in such a way. It has to do with something called tokens. ChatGPT uses them. GPT-4 uses them. It's how those vendors make money. OpenAI is the company behind ChatGPT, GPT-4, and several other models – tokens are their method of revenue. Any open-source LLM you select can be modified, tweaked, and so forth, then added to your system, learning technology, authoring tool, or content (as it relates to our industry). I won't get into the technical "how to," but I will say it isn't a free deal for any vendor out there. If you, Company X, want to add Gen-AI to whatever you are doing, then an open-source LLM is the route to go (some are 100% free; others are fee-based). And those that are fee-based can get very expensive, which is why – as it relates to our industry – vendors will try to reduce the cost in any way possible.

The five questions will come, but first, you need to understand tokens at a high level, because they are part of one of the questions you will ask any vendor who has Gen-AI.

Tokens

To go further down into tokens, I recommend doing a search around Gen-AI and tokens. The basics work like this: anytime the AI processes text, it breaks the text into tokens. "Tokens can be words or chunks of characters," notes Azure OpenAI.

Another way to think of tokens is as pieces of words.

Multiple sites use the example of "hamburger." It looks like one word, but with tokens, it counts as three: "ham," "bur," "ger." And you are charged for each token.
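To make the idea concrete, here is a toy sketch in Python. This is NOT a real tokenizer – production models use byte-pair encoding over a learned vocabulary of tens of thousands of pieces – but it shows the mechanic of one word becoming several billable tokens, using a tiny made-up vocabulary:

```python
# Toy illustration of subword tokenization -- NOT a real tokenizer.
# Real models use byte-pair encoding over a learned vocabulary; this
# just shows how one word can become several billable tokens.
VOCAB = {"ham", "bur", "ger"}

def toy_tokenize(word: str) -> list[str]:
    tokens, i = [], 0
    while i < len(word):
        # Greedily take the longest vocabulary piece matching at
        # position i, falling back to a single character.
        for size in range(len(word) - i, 0, -1):
            piece = word[i:i + size]
            if piece in VOCAB or size == 1:
                tokens.append(piece)
                i += size
                break
    return tokens
```

Running `toy_tokenize("hamburger")` yields `["ham", "bur", "ger"]` – one word, three tokens, three charges.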

If the language is non-English, the token numbers increase. “‘Cómo estás’ (‘How are you’ in Spanish) contains 5 tokens (for 10 chars). The higher token-to-char ratio can make it more expensive to implement the API for languages other than English.” (OpenAI)

OpenAI provides a Tokenizer, which allows you to see the number of tokens for a given text. Please note that a token often includes the whitespace preceding the word itself. Example: " Bye"

I tried it out with the phrase "Vendors who use proprietary models without stating whether it is built on an open-source model, is a concern." The number of tokens: 22.

Okay, I added some words/characters, but what is the basis, or method, for processing and fees?

I did a deep dive around the net seeking a simple explanation and chose Azure OpenAI's: "The total number of tokens processed in a given request depends on the length of your input, output, and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models." (Azure OpenAI)

In other words, the moment the end user types in a word, phrase, or whatever, and hits the button for a response, the cost for tokens begins. When the end user follows up, more token costs. As you can see, the numbers will rapidly increase. For example, OpenAI's ChatGPT is free to use, yet running it reportedly costs $700,000 a day. That is seven hundred thousand dollars a day.
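As a back-of-the-envelope sketch, here is how those per-request charges stack up. The roughly-4-characters-per-token rule of thumb comes from OpenAI's own guidance for English text; the per-1K-token prices below are illustrative (in the ballpark of GPT-4's launch pricing), not any particular vendor's rates:

```python
# Back-of-the-envelope token-cost sketch. Assumptions: ~4 English
# characters per token (OpenAI's rough heuristic) and illustrative
# per-1K-token prices -- check your provider's actual price sheet.
def estimate_tokens(text: str) -> int:
    """Crude token count: about 4 characters per token for English."""
    return max(1, round(len(text) / 4))

def request_cost(prompt: str, response: str,
                 usd_per_1k_input: float = 0.03,
                 usd_per_1k_output: float = 0.06) -> float:
    """Both the user's input and the model's output are billed."""
    return (estimate_tokens(prompt) / 1000 * usd_per_1k_input
            + estimate_tokens(response) / 1000 * usd_per_1k_output)

def conversation_cost(turns: list[tuple[str, str]]) -> float:
    """Every follow-up adds more tokens, so costs accumulate per chat."""
    return sum(request_cost(prompt, response) for prompt, response in turns)
```

Multiply that by thousands of learners running multiple chats a day, and you can see why vendors look for any way to trim token usage.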

Pricing for tokens varies. That is to say, if a vendor uses any of OpenAI's LLMs, the cost per token isn't the same across models. Nor is Stability.ai's pricing the same as OpenAI's, which most folks will recognize from ChatGPT and their latest version, GPT-4.

Finding the pricing is another challenge for those of a curious nature. Here is OpenAI's pricing. I should note they offer two pricing models, but vendors would use the token model more than, say, the subscription, which is what I use. The interesting piece around some of OpenAI's models is that training on your data is fee-based, even before actual usage.

Other LLM Models

There are a lot out there, and new ones continue to be added at a faster rate than most anticipated.

There are open-source models that are 100% free, but these are based on other Gen-AI models; thus, while free, they aren't, say, foundational fee-based LLMs. You might therefore find vendors who went 100% free open source using one of these (a way to avoid paying token fees). This is just a shortlist.

  • Vicuna
  • Stability.ai (They offer 100% freebies, and DreamStudio, which is fee-based, even if you only want to use the API.) Their newest one is StableLM.
  • GPT4All
  • LAION
  • Dolly-2
  • Alpaca (According to the site, you can reproduce it for less than $600) – but I slide it into the free category here.
  • Koala (Built on LLaMA 13B)
  • GitHub (a repository of many 100% free open-source models) – This link goes to the Gen-AI ones.

Most people will recognize OpenAI – but are likely unaware that they offer multiple models, including DALL-E 2, which creates and edits images (Bing Create uses this), and Whisper, which is a "speech recognition model that transcribes, identifies and translates into multiple languages." (OpenAI)

Google is expected to announce PaLM 2, the next version of the model behind Bard, with other LLM models to follow. It is Google's own model and not available elsewhere. To learn what PaLM was trained on, for those curious, read here.

A Short List – These are all "open source" LLMs (fee-based)

  • LLaMA (from Meta) – From LLaMA, there are numerous offshoots – view
  • Azure OpenAI (Microsoft) – For those using Azure
  • Amazon Titan (Amazon) – Available only to those who use AWS. They also offer Bedrock, which allows you to build from other LLMs (for example, you have Stability.ai and want to build from that). Foundational simply means it is the foundation LLM – which you need – and thus all of the above, including OpenAI's, are foundational.
  • BERT (and others from Google, for use with Google Cloud)
  • Databricks (I place this more into the AI agents category, which I believe is really the power for our industry; coverage on how it works, etc., is for another time.)
  • Midjourney (Image Creator)

Why would someone want to pay for an LLM when freebies are out there?

One of the key advantages is the parameters. GPT-4 reportedly has around 1 trillion parameters (OpenAI has not confirmed a figure), while GPT-3.5, the model behind ChatGPT, has 175 billion. Personally, it just comes down to your preference. Vendors who raise capital/funding are in the business to make money; thus, to get more than the freebies offer, at some point you will have to pay.

What does this have to do with the five questions? Because one of them is (and it's the most important):

  1. What large language model (LLM) do you use with your learning system/technology/authoring tool/content platform, etc.? They may offer two or more, which experts in the LLM space recommend, but I doubt, at least right now, that is the way vendors are going.

A vendor might respond with "a proprietary model," which sounds amazing, but it has to be built on something. They do not have the unlimited funds to build 100% from scratch, because a) the computing power to run Gen-AI is massive – high carbon footprint; b) to stay cool, the hardware needs water; and c) it just doesn't make any sense for them to go that route.

Thus, a proprietary model, to me, means they built it on some foundational model, made changes to it (as anyone would do – hence the open-source part), and said, "Ta-da, proprietary!" What you are seeking is the foundation (again, LLMs are foundational). If a vendor said, "Hey, we built our system originally on Moodle," you might cringe, and yet there are vendors who initially launched their system on Moodle and then revamped it so much that you couldn't even tell when they debuted it. That said, knowing at least the name of the model means that, for those curious, you could explore via the net what it is, how it works, and so forth. Maybe you don't care, but I would if a vendor said Dolly versus GPT-4. I would want the latter.

One vendor who reached out to me the other day to show their system using Gen-AI went beyond vague when I asked what LLM they used. They told me they used "proprietary, short, and big ones." Now, I have no idea what that means and figured I'd corner them once I see the system (in early June). Short and big ones? We are not talking about hummingbird feeders here – "Hey, do you want a short one or a big one for those super large hummingbirds?"

Yet they are pushing this solution out as though no one will dare ask what the LLM is, let alone know what a large language model even is. You don't need to know unless you are super curious, but the point here is to get the vendor to tell you specifically what they used or are using. Knowing it will aid you in following up with a secondary question – which you must ask!

2. What data sets was your LLM trained on? You cannot skip this question and think, what does it matter? Because if they trained it on Wikipedia, internet searches from 2022, Kirkpatrick, and other learning models they selected, that would be relevant. Take ChatGPT, for example. Its data sets run through 2021. Yes, it learns over time, but the initial data sets are relevant.

One vendor who pushes the whole Gen-AI angle with their new authoring tool told me that the data sets included learning models selected by experts they identified or worked with. Which ones? And who are these experts? Unless they raised Donald Kirkpatrick from the dead and pulled a Frankenstein with Gagne, I'm not seeing an expert here. Oh, they also use a proprietary model.

3. If the vendor uses a fee-based model, let's say GPT-4, will you – as in you, the client – see an increase in fees in the coming years under your contract?

Remember when I said these models are not cheap due to the tokens? Well, if you think the vendor is going to eat that cost, then Bugs Bunny is right behind you, and the Hindenburg is still a viable mode of transportation.

Just think how much money this is going to cost a vendor. It is going to get pricey real quick, even if they use, say, part of the LLM and then skirt it by adding a different layer on top. Those costs have to go somewhere, and to me, that somewhere is pricing. Pricing has always been an arbitrary model. If a vendor can get $60 per seat per user/year at 2,500 users, then they have no problem doing so (this is just an example). I often tell the story of a vendor who wanted to charge me $48 per user/year (for a seat), which I declined, and then declined their next price. The next day, it was down to around $9. And they still made money on me.
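The seat math is worth spelling out, because small per-seat differences compound fast across a user base. The numbers below are the hypothetical ones from the examples above (and the 2,500-seat count applied to all three prices is my illustration, not the actual deals):

```python
# Hypothetical seat-pricing math from the examples above -- not any
# real vendor's rates. The 2,500-seat count is illustrative.
def annual_cost(price_per_seat: float, seats: int) -> float:
    return price_per_seat * seats

print(annual_cost(60, 2500))  # 150000 -- $150K/year at $60/seat
print(annual_cost(48, 2500))  # 120000 -- $120K/year at the first offer
print(annual_cost(9, 2500))   # 22500  -- $22.5K/year, and still profitable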

Vendors can hide the price in setup (if they charge for it), APIs (if they charge), onboarding, training, and so forth. Thus, when you sign that contract, you want to lock in that price for the length of the contract. No price increases of X percent that some vendors push as though you have to eat the cost of inflation or living costs. None of that nonsense. Lock it in. Because they will not eat that cost. They can't (referring to fee-based models, and I think some 100% freebies too, because the time to change the code and create your own exclusive model with data sets still costs them money). Oh, and ditto on the authoring tool, learning tech, or whatever else you are getting for your e-learning that charges a fee.

4. For clients who already have a Gen-AI model in use within the company: can you connect it via API to their Gen-AI model, to bring whatever data you want in and out? This is going to be tricky, because I wonder how many vendors who have Gen-AI in their system, authoring tool, or learning tech have thought about it. But at some point this question is going to come up, and when it does, the vendor will have to know the answer.

I can see the inquiry around plugins, because it is already starting to happen (not from clients per se, but in general – in other solutions not specific to learning/training or e-learning). If I were a vendor, I would start adding the plugins that exist for my LLM; vendors who went with a 100% secret model that exists nowhere else on the market, including among the 100% freebies, are going to be in for a shock. That said, the idea that you can take your HRIS platform and ask to hook it into the vendor's LLM isn't something I would recommend. First off, your HRIS, HCM, ERP, payroll, or whatever has to have an LLM. Secondly, it has to match what the vendor has (especially if the vendor is on AWS – which the majority are – and using Titan), OR the learning vendor has to have a solution like Bedrock (using AWS as an example) whereby you can have X, they can have Y, and it will work. Remember, we are at a very early stage here.
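For a sense of what the plumbing might look like, here is a hedged sketch of building an OpenAI-style chat request that a learning vendor's integration layer might construct. The payload shape (a model name plus a messages array) mirrors OpenAI's chat-completions format; the function name, system prompt, and field choices here are illustrative assumptions, not any vendor's actual API:

```python
import json

# Sketch of an OpenAI-style chat-completions payload that a vendor's
# API bridge might build. Names here are illustrative assumptions,
# not a real vendor's API.
def build_chat_request(model: str, user_message: str,
                       system_prompt: str = "You are a corporate learning assistant.") -> str:
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # lower temperature = fewer creative leaps
    }
    return json.dumps(payload)
```

The hard part isn't the payload; it's the contract around it – what data crosses the wire, who retains it, and which model actually answers.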

Then there is this thing called data – as in, what data is getting passed. Beyond serious concerns around privacy, legality, and who knows what (which you would need to create legal documents for), you have to ask what the vendor will do with that data: the questions asked, the responses, by whom, and so on.

If your company is not at the stage of having its own LLM (whether 100% free or fee-based), then you can avoid this question. And if you have one and think, "Let's do it with X learning whatever," respond back: uh, let's not, because there are way too many unknowns out there, plus we are taking extremely tiny steps here. Even by the end of 2024, I would be leery. Let's not forget that Gen-AI learns as it goes. One company, while training its solution on data sets, found the LLM had learned a new skill that wasn't part of, or didn't exist in, those data sets (it was a language). We should all be on "HOLD" when it comes to matching systems. That tangent had nothing to do with question four; it's just a surprise of what we are dealing with.

5. How will your system/learning tech/authoring tool/etc. deal with hallucinations?

I should have placed this as question 2 or 3, because it is the BIG ELEPHANT in the room, and vendors are not so quick to address it in their pitch. Hallucinations exist in all Gen-AI. You can't get rid of them today. So any vendor who says, "Oh, we have a method to have none, or only a small number," is selling you a magic elixir – is Newton involved?

Hallucination is the biggest issue/problem with any LLM and, more broadly, Gen-AI. Hallucinations are fake or false information. And they occur. NVIDIA just came out with a solution they claim reduces hallucinations (NeMo Guardrails), but it is truly unknown how well it will work over a period of time.

Fun Fact – When Bard was tested by Google employees, one responded that it was a "pathological liar." Google still went live with it.

Bottom Line

Here is my promise to you. I'm going to hold the feet of those vendors pushing out Gen-AI to the fire. I am going to ask the tough questions – not just the ones listed above, but ones that go further than that. I'll report my findings.

As LLMs and Gen-AI in general advance, I will keep you updated. I plan to create a massive directory, accessible via the Learning Library, of Gen-AI tools you can use for your learning and training. That is coming this summer.

And for vendors who think nobody is paying attention:

Worry,

Because we all are.

E-Learning 24/7