xAPI, LRS – The Interview


Today’s post is a departure from the usual, since it takes a different path than previous posts.

xAPI and LRSs (Learning Record Stores) have been in the news and frankly everywhere within the e-learning industry over the past year and a half.

Part of the problem, though, is that there is a lot of information out there, some of it erroneous and, honestly, equally confusing. In early December, I decided to present the real facts and the real insight, and to eradicate the confusion.

The challenge was how to do it, since the subject involves an extensive amount of technical jargon that cannot easily be removed.

While I am able to provide a variety of insight and details on the subject, I am by no means a foremost expert on it.

This is where I had an epiphany.  

Reach out to someone I know who is not only the best at what he does, but the foremost expert on the subject, and present a special blog post.

With my journalism background in hand, I contacted Aaron Silvers and inquired whether he would be interested in doing an interview-style post.

This is the interview.  

Aaron E. Silvers

Aaron, can you tell me a little bit about your background and how you became involved with ADL?

First off, Craig, thanks for having me. You have some terrific questions and I’m very pleased you reached out.

In 2003, the eLearning startup I worked for shuttered its doors as our customers moved to adopt SCORM, and we had no idea how to do that. Two months after my layoff, a company called CTC recruited me to work for ADL.

My job was to help author SCORM 2004, focusing on how to develop content that would work in SCORM systems. It was a no-brainer to take the job. Any innovation that could shut down as many companies as SCORM did was something I wanted to master.

I worked with ADL from 2003-2006 and participated in the Technical Working Group afterwards. I helped with LETSI in 2008-2009 while I spent most of my working life at W.W. Grainger, Inc.

I worked with business analysts, IT and communications teams on governance for these new technologies. We created a strategic framework that mapped knowledge sharing to measurable gains in productivity, quality and revenue. This work informed the conversations with several current and former ADL friends.

These conversations became something we called “BAQON” — a Brokered, Anonymous acQuaintance Open Network.

In early 2010 we pitched the BAQON architecture to Bob Kahn at the Corporation for National Research Initiatives.

Bob Kahn is the same guy to whom Tim Berners-Lee pitched the World Wide Web. Bob Kahn thought we were crazy, but not so crazy that it couldn’t work. Over the next four months, ADL brought the team and me on board to work on what would become the Training and Learning Architecture, or TLA. The first technology in that architecture was the Experience API.

What is Tin Can exactly?

“Project Tin Can” was the name Rustici Software came up with to describe the work ADL contracted them to do. The project was to think about a new way to approach learning based on current needs. Their work included user research.

They collected suggestions on a UserVoice site. They conducted one-on-one interviews with many eLearning professionals. They reviewed almost 100 white papers authored for LETSI, which in 2007-2008 was looking at what a “SCORM 2.0” might look like.

The name “Tin Can API” stuck around once ADL advocated to use the prototype produced out of “Project Tin Can” as the basis for the Experience API (“xAPI”).


Last I heard, ADL calls this new standard the Experience API? (On a side note, I was always told that xAPI and Tin Can are one and the same.)

ADL, IEEE and many people and businesses talking about this technology call it xAPI.

Right now, “Tin Can” refers to xAPI. They are one and the same. “Tin Can” is a trademarked name owned by Rustici Software.

They have vowed to give up the rights to any organization taking stewardship of the specification. Unfortunately, since ADL is a US Government organization, they can’t take possession of the registered trademark. ADL can’t file for a trademark themselves, so for now Rustici Software is stuck with the trademark for “Tin Can.”

That said, Rustici Software released the “Project Tin Can” work to open source.

They continue to contribute much to the specification effort (open source code libraries in many languages). They’ve authored a great deal of code for the xAPI Conformance Test. I have strong faith that if there is an industry organization to steward xAPI and it was willing to take the moniker of “Tin Can” from them, they’d be more than willing to grant it.

Could you explain in layman’s terms what xAPI is? I know that many people were told that SCORM didn’t play well with mobile and that is why xAPI came about. Is that true, or is it for another reason?

In layman’s terms, xAPI is a means of sharing a record of activity about some piece of content or media with another system. That sounds like a pretty dry and simple thing. Because it’s so straightforward, it’s open-ended in how many ways someone can use it. That’s what makes it so powerful.
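To make that "record of activity" idea concrete: an xAPI activity statement is a small JSON object built around an actor, a verb and an object. Here is a minimal sketch in Python; the actor/verb/object structure comes from the xAPI specification, while the specific names, email address and activity ID are invented for illustration.

```python
import json

# A minimal xAPI-style activity statement: "actor did verb on object".
# The actor/verb/object shape is from the xAPI spec; the learner,
# email address and course ID below are made-up examples.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/intro-to-xapi",
        "definition": {"name": {"en-US": "Intro to xAPI"}},
    },
}

# Serialized as JSON, this is the payload a content player would
# send to whatever system is listening for xAPI statements.
payload = json.dumps(statement)
```

That three-part structure is what makes the format so open-ended: anything that can say "someone did something to something" can emit a statement.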

What spurred us to seek data interoperability, as opposed to straight up focusing on making SCORM work for mobile, were a few things.

With regard to mobile… SCORM *could* play well enough with mobile.

As early as 2004, you could find demonstrations by the Korean Education and Research Information Service (KERIS) of SCORM running on Compaq PDAs.

The problem was just that there was no *one* way to do it. The nature of mobile devices became too complex too quickly to wrangle a solution that industry could adopt. As much as SCORM content back in the day required some futzing before it worked, it was closer to a turnkey solution than anyone had with mobile.

Our work revealed a bigger challenge once we picked long enough at how to do a mobile SCORM. SCORM Version 1.2 was so popular and so adopted that people weren’t even bothering with SCORM 2004. This was regardless of the resources that went into improving SCORM Version 1.2.

By 2008-2009, SCORM Version 1.2 was so entrenched that no vendor really wanted to touch SCORM at all. If anyone was going to address mobile, social media, games, simulations or any other use cases, it would need to be outside of SCORM.

Doing something “not SCORM”  liberated us to think about such challenges differently. And that got my team thinking a lot about the interoperability of data. This focus, inspired by the ActivityStrea.ms working group, informed the team on Project Tin Can and eventually the Experience API.


I was told xAPI is in IEEE’s hands? What is IEEE?

IEEE is the Institute of Electrical and Electronics Engineers — an international, industry standards body. In our world, they’ve standardized things like the Learning Object Metadata model used in SCORM packages. IEEE also specifies WiFi, Ethernet and a host of other standards we take for granted but that help ensure things “just work” as expected.

I’m the chair of the part of the IEEE-LTSC (Learning Technology Standards Committee) that is looking at standardizing xAPI. We are in the process right now of getting the stamp of approval from the IEEE-SA to standardize the part of xAPI that deals with the activity statements.

That happens with what’s called a “Project Authorization Request” into IEEE’s New Standards Committee (or “NESCOM”).  

Currently, I’m helping organize an industry consortium called “Connections Forum” to help with making the PAR for xAPI.

It’s going to start with the information model and a data binding for it; in layman’s terms, the part that’s about what the data looks like in xAPI. It’s the piece that many people agree with and are interested in using even outside of learning industry contexts.

In time, through the Connections Forum, we may write or adopt standards for other things currently described in the xAPI specification, like LRSs, and probably things that aren’t in there but are possibly closer to the scope of what is in ADL’s TLA.

 So are you saying there are now two standards doing the same thing with two different names?

There is one specification group supporting one spec; many people call it by two different names. “Tin Can” has been well publicized and, in fairness, ADL used it quite often in the first months of the specification work.

There is one specification: xAPI. The community is all working on that together.

Very recently, there began concurrent activities related to xAPI. I am leading the effort to standardize xAPI, and that is overlapping an effort within the current specification group to scope out what Version 2.0 might address.

At the same time, a large group of adopters are addressing conformance requirements for Version 1.0.3 of xAPI, and the next bit of work for that group is to clean up the document so different functional parts of the spec are more distinct, so that further work might be scoped to certain parts of the spec without necessarily impacting other parts of the spec.

The name issue is confusing to the industry and the market for adoption. People miss out at conferences because they’re looking for “Tin Can” but the sessions all read “xAPI.”

That’s a challenge, but, not to make light of it, it is (at this time) not a hindrance as far as the progress on the specification itself.

There are always arguments about the minutiae of specification requirements, but the naming thing isn’t really stopping the collaboration on the spec itself, though I would bet that everyone using the same name would encourage more people to get involved.

It’s probably also worth noting that most people get confused by even what I’ve tried to clearly describe in this response.

It’s probably enough to say that, for the foreseeable future (likely the next two years) there’s not going to be a lot of visible changes to xAPI, and even when big changes do happen, there’s going to be nothing that stops the current use of xAPI. That’s not to say that what we do will be backwards compatible — more likely, from an implementation standpoint, the solutions you count on will easily find ways to work with everything, as they often do already.

Thank you for that. Can someone (a vendor) create their own form of something like xAPI, but not, per se, actually xAPI?

We intentionally licensed the specification under Apache 2.0 so that derivative works would have no impediment. That’s the means by which I’m able to lead the effort in IEEE. The way government works, it’s very difficult to simply hand over control of a spec, even as it’s developed to be completely open source.

This was a lesson learned as ADL attempted to transition SCORM to another party, but couldn’t.

To the spirit of your question, it’s possible a vendor could create their own spec based on xAPI, but bigger questions come to mind — why would they? What would they gain? Without the community in place, the adoption that’s pretty easy to follow from anywhere, the conformance testing being developed, the standardization activity… anyone who’d want to do this would be so much better served working with the community we already have and the industry that’s coming together around xAPI.


Conformance requirements?  Standardization?  Could you go a little bit more into this using layman terms?

Absolutely. A lot of people talk about SCORM Compliance or xAPI Compliance, but that’s really not the right wording. Compliance means someone has to legally “comply” with something, like it’s a law. There’s no law for SCORM or xAPI, like there is for accessibility with Section 508.

What people really want to know about is if something conforms to the specification — meaning “does this product actually do xAPI the way the spec says it must?” Specifications, when done well, have conformance requirements that talk about all the things an implementation MUST or MUST NOT do. xAPI has this. So does SCORM.

Where it gets tricky, particularly in terms of what’s on the market, is in all the things a spec says implementations SHOULD, SHOULD NOT or MAY do, because those aren’t MUSTs or MUST-NOTs. Technically, a vendor could ignore all the SHOULDs and MAYs, and as long as they follow the MUSTs and MUST-NOTs, they’re conformant.
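A toy sketch can make the MUST-versus-SHOULD distinction concrete. The check below enforces only the required statement properties (actor, verb, object are required by the xAPI spec) and merely warns about an optional one; the particular key lists are a simplification for illustration, not the real conformance test suite.

```python
# Toy illustration of MUST- vs SHOULD-level checking, not a real
# conformance test. An xAPI statement MUST have actor, verb and
# object; a property like "timestamp" is optional for the sender.
REQUIRED = ("actor", "verb", "object")   # MUST-level (per the spec)
RECOMMENDED = ("timestamp",)             # SHOULD-level (illustrative)

def check_statement(stmt: dict) -> tuple[bool, list[str]]:
    """Return (conformant, warnings). A missing MUST fails the
    check; a missing SHOULD only produces a warning."""
    missing_musts = [k for k in REQUIRED if k not in stmt]
    warnings = [f"missing recommended property: {k}"
                for k in RECOMMENDED if k not in stmt]
    return (not missing_musts, warnings)

# Conformant even though a SHOULD-level property is absent.
ok, warns = check_statement({"actor": {}, "verb": {}, "object": {}})
```

Two products can therefore both be conformant yet behave differently wherever the spec only says SHOULD, which is exactly why narrowing those down matters.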

What we’re doing right now with xAPI as a specification is focusing our efforts on filtering out as many SHOULDs and MAYs as we can so there’s little ambiguity about what it means to be conformant. That makes it easier to create tests. When we have tests that can be run by third-parties, then we can get into certified products — and that’s where we as an industry really need to go, which is why we’re working hard to get Connections Forum rolling even as the spec group is dealing with conformance and standardization begins on what’s specified.

Standardization involves removing the ambiguities of the language in the specification for even broader, more massive adoption.

Instructions have to be so precise that in every case, implementation of xAPI “just works.” Light bulbs are standardized — you never buy a light bulb that doesn’t fit a standard light socket unless it’s the wrong bulb. USB is a standard — you never can find a USB device that doesn’t fit or work with your port, unless it’s the wrong type. We want xAPI to be just that reliable.

 Thank you.  Let’s jump into Learning Record Stores (LRS), another term that is creating some confusion in the space.


 What is a Learning Record Store (in layman terms) and why would I want to have one?

A Learning Record Store is how we describe a set of functionality for the part of a system (or a standalone system) that stores the activity data sent to it by whatever creates xAPI activity statements.

The LRS could be the part of an LMS that receives data from an eLearning course. The LRS could be part of a Customer Relationship Management system that does analytics with xAPI on the different sales and support staff using it, and what they do to close leads. An LRS could be a standalone system, like the hub in a hub-and-spoke model of a learning ecosystem, where many different courses, apps and enterprise systems are sharing data about one or more learning experiences.
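The hub role described above boils down to: accept statements from many activity providers, and answer queries from other systems. A deliberately tiny in-memory stand-in sketches the idea; a real LRS speaks HTTP per the xAPI spec and does far more (authentication, voiding, attachments), and the class and data here are invented for illustration.

```python
import uuid

class TinyLRS:
    """A toy, in-memory stand-in for a Learning Record Store.
    It accepts statements from any provider and answers queries,
    which is the hub role an LRS plays in a learning ecosystem."""

    def __init__(self):
        self._statements = []

    def store(self, statement: dict) -> str:
        # A real LRS assigns a statement an id if the sender didn't.
        statement = dict(statement)
        statement.setdefault("id", str(uuid.uuid4()))
        self._statements.append(statement)
        return statement["id"]

    def query(self, actor_mbox: str) -> list[dict]:
        # Return every statement recorded about one actor.
        return [s for s in self._statements
                if s.get("actor", {}).get("mbox") == actor_mbox]

lrs = TinyLRS()
lrs.store({"actor": {"mbox": "mailto:a@example.com"},
           "verb": {"id": "http://adlnet.gov/expapi/verbs/attempted"},
           "object": {"id": "http://example.com/sim/1"}})
records = lrs.query("mailto:a@example.com")
```

Because every provider writes to the same store in the same format, a reporting tool, an LMS or an analytics system can all read the same records back.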

Some people have been told that you have to have xAPI before having an LRS. Is this accurate? Can I have an LRS without having to use xAPI?

It’s not necessary to use an LRS to do xAPI. A database will do if you’re never going to share the information being sent to it, and you’re never going to send anything to that database except from whatever it’s built for. However, the minute you want to add new activity providers, like Articulate, Captivate or Lectora courses with almost zero-configuration or redevelopment — or if you ever want to share that data across multiple systems, you’re going to want an LRS. Endpoints just make it easy to use with solutions that create xAPI data.

I know that you can have an LRS without it being in an LMS or an authoring tool. Why would I want to do that?

While there are certainly a lot of enterprises who bought into LMSs in the early 2000s as capital investments, and they’re looking to update their infrastructure without completely undoing what they’ve already established, there’s a significant demand by younger organizations, some small and many mid-market, for less siloed infrastructure.

They’re not looking at learning technologies as capital investments, but as services that can be updated and switched when their needs change. Disaggregating the LMS is likely what drives such use cases for an LRS as its own system, and looking at the market from my vantage point, clearly there’s a good demand for that.

You’re not bound to the assumptions and workflows that come with an LMS. You can be more agile and ad-hoc with your learning approach and your content strategy. You can be more strategic to your company’s goals rather than adopting the design assumptions baked into a given LMS.

I don’t know that that route is for everyone. There are many things LMSs do that are hard to replicate, in a cohesive way, with separate systems: things like resource scheduling and certification management. That sounds like nothing but a database, but why build it all yourself when systems already do this for you in a variety of ways?

When you say activity streams, what does that mean exactly? I mean, what counts as an “activity stream”?

An activity stream is a list of recent activities performed by an individual or group. Typically, with xAPI, a single piece of content or media generates this stream of data.

For example, Facebook’s News Feed is an activity stream.
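In code terms, an activity stream is just a time-ordered list of "actor, verb, object" entries. A hypothetical sketch of building a feed of one person's most recent activities, with invented names and activities:

```python
from datetime import datetime, timezone

# An activity stream modeled as (timestamp, actor, verb, object)
# entries. The people and activities are invented for illustration.
stream = [
    (datetime(2015, 1, 5, tzinfo=timezone.utc), "Dana", "launched", "Safety Course"),
    (datetime(2015, 1, 6, tzinfo=timezone.utc), "Dana", "completed", "Safety Course"),
    (datetime(2015, 1, 7, tzinfo=timezone.utc), "Lee", "commented on", "Safety Course"),
]

def recent_activities(stream, actor, limit=10):
    """Most-recent-first list of one actor's activities, feed-style."""
    mine = [e for e in stream if e[1] == actor]
    mine.sort(key=lambda e: e[0], reverse=True)
    return [f"{a} {v} {o}" for _, a, v, o in mine[:limit]]

feed = recent_activities(stream, "Dana")
```

A News Feed is this same idea at scale: sort a list of activity records by time and render them.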

Many web services have similar implementations for their users, borrowing from a specification called “ActivityStrea.ms.” When we were coming up with an approach for the Experience API back in the summer of 2009, I talked quite a bit with Chris Messina, who was an author of the ActivityStrea.ms spec.

There were key differences in the approach we wanted to take with xAPI which is why we ultimately didn’t just use ActivityStrea.ms outright: 1) the vocabulary associated with the activities was focused on commercial web use, and we were looking for something that could be generalized; 2) the data never really was supposed to go anywhere with ActivityStrea.ms — it was meant to stay within the web service that generated the data, whereas we were explicitly looking at interoperability — being able to move the data around was paramount.



One of the items I have heard is the “interoperability” angle of an LRS data record, which means that you can take the data record from one LRS and plug it into another LRS and it works fine.

That is indeed the value proposition for xAPI. We want a system to be able to interpret, appropriately, consistently and reliably, the activity you performed and the context in which it was performed, no matter where it was recorded.

Many of us remember that “interoperability” pitch with SCORM and, as you are aware, it often didn’t happen. Can you say definitively that this won’t happen with a data record going from one system to another? (Even if they are standalone LRSs.)

Interoperability gets a bad rap from ten years ago. If you look at where interoperability is with SCORM today, it’s pretty damn solid for it not being an actual standard.

I’d be lying to you if I said you won’t find bumps in the road with xAPI’s data interoperability — especially from its earliest days, but compared to your other options, which most people can’t even identify, we have a great track record. You’re not hearing the cries about interoperability that we had with SCORM’s adoption at this comparable time — in fact, in terms of adoption maturity, xAPI is on a wholly more advanced and accelerated curve.

Where people will get upset about interoperability will be in the SHOULDs versus the MUSTs of the spec, and on implementation specific things like access to data for analysis. We’re all new at this, learning and improving daily as this industry works across vendor interests in a way that’s unprecedented here and elsewhere.

We have a growing community worldwide that is committed to making this the best damn data interoperability specification there is.

My next questions are related to privacy and security

Scenario:  I work for company X and leave to go to Company Y.  Company Y does not have a LRS.  Where do I put my data record?  Do I need to create a learning locker or something else?

Today and for the foreseeable future, that’s a tough question to answer. It’s entirely dependent on a couple of variables that would be hard to specify, let alone standardize, because every country has different privacy laws — even different states in the US have different data privacy laws now.

Today, it really depends on which LRS Company X has, and what Company X’s policies are about your access to the data about you. Like with Google, Facebook and everything else, there are no clear policies on what rights a person has with regard to the data that’s arguably about them. This is an area I’m very keen on solving, but it’s a much bigger issue than xAPI alone. Still, it’s an issue I hope we can make better with xAPI and model for other data concerns how to do it better.


Sony was recently hacked, and all types of information, including Social Security numbers, private e-mails and so forth, were taken and then published on the Internet.

Should people be concerned that this could happen with an LRS, which captures a lot of information? Is there anything a company or school can do to avoid this potential scenario?

Data breaches are inevitable in this day and age. The best a company or school can do is to be realistic about this and follow current best practices.

The reality is that what we’re doing with xAPI is far more secure than what LMSs had to do with SCORM, but just like companies and schools, we as a community need to be vigilant as well, which will be helped immeasurably by an industry organization that can provide guidance for the evolution of the technology as well as the implementation by customers and end users.

Finally, let’s talk about the future.

Where do you see xAPI and LRSs going in, say, three years? I mean, what is the potential here?

The potential, in three years’ time, is that we have the beginnings of a suite of standards all collectively known as “xAPI.”

The information model and a JSON binding (something like what we use today), an XML binding (used by simulations and many other enterprise systems) and a Low Energy Bluetooth binding (used by beacons and sensors) make it possible for hospitals, construction sites, laboratories, power plants and the smart grid, refineries, military as well as schools and offices to support learning-as-a-practice — the kind of learning-by-doing that is tied to competencies as well as outcomes.

We begin to realize a way of meshing learning and performance in a way that respects a person’s dignity and intelligence and is empowered by the digital capacity, rather than simply serving it.

Do you see something else, some other type of standard in the next five years coming out?

I see other standards emerging as part of xAPI that deal with how we describe and translate competencies, how we securely identify people, how we secure a given piece of data so it is only able to be used as intended… I can go on for a long while on just this question alone.

Lastly, if there is one thing people should take away from xAPI and LRSs, what would it be?

xAPI is a transformational idea. It began as a means for people to realize self-actualization at scale, and what began as a mission by a small group of us has grown beyond the community that created a technical specification — it is now inspiring industries to think differently about how to work better by being able to share and direct information outside of a given context. LRSs and the systems that make use of LRSs are a means to achieving that end.

E-Learning 24/7
