Speexx Exchange Podcast – Episode 22:
Hybrid Teams: The Future is Part Human, Part Machine with Greg Detre

Designing the Learning Experience

Welcome to the Speexx Exchange podcast about “Hybrid Teams – The Future is Part Human, Part Machine”! We’re in “the age of the centaur,” where the best teams (like the mythological centaur – half man, half horse) are made up of two parts: part human, part machine. Data consultant and former Channel 4 Chief Data Scientist, Greg Detre, speaks with Donald about the interchange between people and machines, starting with the identification and recognition of skills. From there, they discuss how to trim these insights down to a manageable set of resources (akin to data visualization) and make teams more efficient and effective in the workplace. Tune in to find out more!


Intro 0:01   
Welcome to the Speexx Exchange podcast with your host Donald Taylor. As a renowned learning and development industry expert, as well as chairman of the Learning and Performance Institute, Donald sits down with experts from around the globe to talk business communication, learning technology, language, digital transformation, and engaging ways of upskilling and reskilling your organization. This podcast is brought to you by Speexx, the first intelligent language learning platform for the digital workplace. Listen in and you might learn a thing or two.  

Donald Taylor 0:35   
Welcome to this episode of the Speexx Exchange podcast with me, your host, Donald Taylor. Today our guest is Greg Detre, the former chief data scientist at Channel 4 and among other things at the moment, a data consultant. Greg, great to have you with us.  

Greg Detre 0:50   
Thanks, Donald! Lovely to be here.  

Donald Taylor 0:51   
I don’t like to go into a huge introduction of people when they can do it so much better themselves. Greg, could you introduce yourself? Where are you coming from, professionally speaking?  

Greg Detre 0:59   
Well, I trained as a computational neuroscientist studying why we forget things. They say that psychologists study their own deficiencies, and so I have a terrible memory. On the back of that, I co-founded a startup called Memrise. I’ve been working in the startup world for the last few years as a mix of CTO and data scientist, most recently as chief data scientist at Channel 4. And now, I help startups and larger companies get the most out of their data teams and out of the data they’re collecting: to hire the right people and manage them in a way that helps them tackle interesting problems effectively.  

Donald Taylor 1:35   
Data is everywhere at the moment. I was doing a talk this morning; you can’t get away from it. I threw up a slide that showed all these magazine front pages from probably about eight or nine years ago, but it was all about big data. Then I said what happened to big data? It sort of went away as a buzz term, but then the use of data entered our daily lives. I illustrated that by just throwing up some of the stats that we bandy around in the world of football every day. Because if you’re a football fan, you’re suddenly used to heat maps, assists, the number of miles run during the course of a game, and so on. Things that we never talked about when I was a kid growing up and now it’s part of daily life. So, we’re already at a phase where data is part of what we do. So, what’s the future? Is the future entirely machine-led? Or is it human? Or is it something else?  

Greg Detre 2:19   
I grappled with this question a few times at The Guardian and Channel 4, especially. Both are companies that put a lot of weight on words. Being the head of the data science team, or the chief data scientist, leading a team of people who care about numbers, there’s a real sense in which it would be easy to talk past one another. I remember, for instance, one project at Channel 4, where we were trying to help improve their forecasting. For the last 15 years, it had been done by an expert team that understands the nuance of television, of audiences, of everything they need to know, as well as being quite conservative. They were doing a super job of forecasting the audiences over the next few weeks, but we had a feeling that perhaps we could be doing something with machine learning that could improve on that. You can imagine naively saying, well, okay, instead of having humans do it, we’ll get the machines to do it and they’ll do a much better job. I never for a moment believed that was going to be true, because these experts had been doing this for 15 years and were doing it well. Sure enough, when we first tried it with machine learning, the machine learning did a pretty good job, but there was no way we were going to be able to just say, oh, we should do it entirely in an automated fashion. In fact, I think there’s an interesting lesson here: the kinds of mistakes the humans made and the kinds of mistakes the machines made were different; they were complementary. It started to dawn on us that maybe a kind of hybrid team that took the best of both worlds could be the answer. Indeed, it’s a lot less threatening as a message to say, well, I think we can improve on what the humans are doing, and I think we can improve on what the machines are doing, and the result will be better with both involved.  
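
The complementary-errors point can be sketched with a toy Python example. All the numbers below are invented for illustration (the actual Channel 4 forecasts aren't public); the point is only that when two forecasters err in different directions, even a naive average of the two can beat either one alone:

```python
# Toy sketch: blending independent human and machine audience forecasts.
# All figures are made up; the lesson is that complementary errors
# partially cancel when the two sources are combined.

def mae(forecasts, actuals):
    """Mean absolute error between forecasts and actual audiences."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

actual  = [100, 120, 90, 150, 110]   # true audiences (thousands)
human   = [ 95, 125, 85, 140, 115]   # expert team: small, consistent misses
machine = [110, 115, 95, 160, 100]   # model: misses in the other direction

# The simplest possible hybrid: average the two forecasts.
blend = [(h + m) / 2 for h, m in zip(human, machine)]

print(mae(human, actual))    # 6.0
print(mae(machine, actual))  # 8.0
print(mae(blend, actual))    # 1.0 -- better than either alone
```

In practice the weighting would itself be learned and would differ per programme, but the qualitative effect is the same: the hybrid only helps because the error patterns differ.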

Donald Taylor 4:02   
I think most human beings would be very happy with the idea that machines make mistakes, possibly less happy with the idea they make mistakes themselves. So, how do you phrase that? How do you put it to a team that you’re now part of something better, because you’ve got machines helping you?  

Greg Detre 4:15   
Well, it’s a delicate conversation. I think you need to start from a place of admitting that people make mistakes. Fortunately, with that team, they were smart, they were experienced, and they very carefully measured their mistakes. So, that’s a good place to start. Of course, you need to be in a psychologically safe environment whenever you’re considering any kind of innovation. But, well, I told them a story. So, if you like, I’ll tell you the same story and you can see if you find it at all convincing. The story starts over 20 years ago, with Garry Kasparov, the great chess player of the time and one of the greatest ever. In 1997, he was beaten by Deep Blue at chess, and we might have thought at that moment, well, that’s it for us monkeys, right? Sure enough, the best chess player in the world now, well, it’s not a human being. But interestingly, for 20 years, the best chess player in the world was not a machine either. The best chess player in the world was what they call a centaur team, a hybrid. That is to say, a good, smart chess-playing human with access to a big computer and a big database, and that combination outperformed just a big computer on its own.  

Donald Taylor 5:21   
Just to be clear, a centaur is half-man, half-horse, if anybody’s not familiar with Greek mythology. I’m assuming, of course, that the machine bit is the legs, and the human remains the brains and the thinking bit up top. Let’s not get too confused there.  

Greg Detre 5:39   
That was exactly one of the things I said to the Channel 4 forecasting team. That, like in this image, the machine is the ass end of the centaur. So, you know, it may not be a 50/50 split. It may be that it starts off where it’s 95% humans and just a little bit of machine. Or in the case of chess at this point, I think the machines pretty much have it sewn up. But it took 20 years, even for the most black and white, kind of deterministic, perfect information scenario, for the machines to eventually kind of say, okay, we can dominate this game. For almost every other problem that we tackle, it’s interesting especially in the knowledge economy or involving creativity, it’s so much more nuanced, so much grayer, that I would expect that this age of the centaur in which we’re entering, where the best teams are a hybrid, that that is going to last for a very long time for almost all the interesting problems that we have.  

Donald Taylor 6:35   
I love the age of the centaur as a term, I think it’s fantastic. You tackled this, didn’t you, at Channel 4, when you were running the learning development team there? You wanted to get great materials, and you found that it was an almost Sisyphean task, if we’re going to keep on using mythological terms. It was a huge task that never seemed to stop: collecting all the information you wanted for your team.  

Greg Detre 6:56   
Yeah, so I wasn’t running learning development, but I was very involved in that problem for my data science team: exactly as you say, trying to think about how to find the best materials for training them. So, I’ll tell you a tiny bit about the kinds of problems I’m working on now that are quite related. What we’re trying to do with a company I work very closely with, called Filtered, is to apply this idea of a centaur tool, exactly as you say. You have thousands or even hundreds of thousands of podcasts and articles and webinars, and God knows what else, on a variety of different topics, and you say, well, we know that we want to teach our employees a variety of skills, whether it’s having difficult conversations or how to visualize a time series. So, soft skills and hard skills, we’ve got all these skills that we want to teach them. How can we pick which of the learning materials we have access to would be most appropriate for each of those? This is an ideal task for a centaur, because you could try to do it with just machine learning, and machine learning will, you know, chomp through 100,000 documents lickety-split, but it won’t do that great a job unless you’ve got smart human beings who are both creating the training set and then refining the results in response. So, what you end up with is the ball going back and forth between the humans and the machines so often that you end up with something that you could not have produced with either humans or machines in isolation.  

Donald Taylor 8:21   
This was the great promise of the internet, or rather, the World Wide Web, dawning at about, I don’t know, not when it was first devised, or even probably in the mid-1990s, but towards the end of the 90s and the beginning of the 2000s. When it reached general consciousness, people were saying, my goodness, we now have access to the world’s information, this is fabulous, we’ll be able to find exactly what we need. But of course, the problem is that when you have all that information, it’s the sea that you’re swimming in. The question is, how do you find it? It’s not even a needle in a haystack, it’s an atom in a haystack. From all that information, how do you find the stuff that’s useful to you? So, you’ve got this hybrid team: you’ve got a machine, you’ve got people. There’s, you said, a ball going backwards and forwards. Can you talk us through the process of who does what? What are the machines good at? What are the people good at, in terms of finding a way through this haystack to that one small thing that’s going to be what you do want?  

Greg Detre 9:12   
Well, let’s take a concrete example. Let’s take the example that Channel 4 had to solve with recommendations and personalization of the homepage. You’ve probably been to channel4.com, or indeed Netflix or any one of a million others, and they will customize what you see based on what you’ve viewed and enjoyed in the past. Okay, so far, so good. Now, there’s a tradeoff here, though, because if we hand this job over entirely to the machines, then firstly, we may miss out on recommending things that we want people to watch, or we may miss out on the chance to express our brand’s voice. We may miss out on the chance to ensure that there’s sufficient diversity or, in Channel 4’s case, that the remit to promote experimental, innovative, or pluralistic content is being met.  

Donald Taylor 9:59   
That’s explicitly part of Channel 4’s job; it’s what they’re told they should do, or that’s their mission anyway.  

Greg Detre 10:02   
Exactly, the government has literally said, by law, this is your job. So, to do all those things, it’s almost impossible at this stage to really train a machine to take into account that kind of multitude of different factors. But at the same time, to have human editors doing that job, well, they do a great job of it, but they can’t possibly customize 16 million or 20 million different home pages. So, there’s a tradeoff there. What we ended up with was a situation where the machines had divided the Channel 4 audience into a few different clumps. They’d assigned different editors to different clumps of users. The editors had created what we call slices, kind of groups of content, that might be about fast cars, or around a particular kind of DIY content, or some other clump of content they knew would appeal to a particular group of people. Then the machines would decide, okay, great, you know, within that, for you, Don, I think we’re going to recommend this particular Anthony Bourdain episode or whatever.  
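
As a rough sketch of that division of labour (not Channel 4's actual system; every name, clump, and score below is invented): the machine assigns users to clumps, human editors curate a slice of content per clump, and the machine ranks within the slice for each individual:

```python
# Hypothetical "clumps and slices" split of labour between machines
# and editors. All data here is made up for illustration.

# Machine: assign each user to an audience clump (hard-coded here; in
# practice this would come from a clustering model over viewing data).
user_clump = {"don": "food_and_travel", "ravi": "fast_cars"}

# Humans: editors curate a slice of content for each clump.
editor_slices = {
    "food_and_travel": ["bourdain_s1e3", "great_british_menu", "taskmaster"],
    "fast_cars": ["f1_drive", "top_gear_classic"],
}

# Machine: predicted per-user affinity for each item (invented scores,
# standing in for a model trained on viewing history).
affinity = {
    ("don", "bourdain_s1e3"): 0.9,
    ("don", "great_british_menu"): 0.6,
    ("don", "taskmaster"): 0.4,
}

def recommend(user):
    """Rank the editor-curated slice for this user's clump by affinity."""
    items = editor_slices[user_clump[user]]
    return sorted(items, key=lambda item: affinity.get((user, item), 0.0),
                  reverse=True)

print(recommend("don"))  # the Bourdain episode ranks first
```

The editors never see 20 million home pages and the model never decides what content exists at all; each side only does the part it is good at.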

Donald Taylor 10:04   
That’s spot on. I love Anthony Bourdain when he’s going through South America giving us all those fabulous recipes. You’re right, as a human being, but I’m sure the algorithm would have found it too. Interesting, okay.  

Greg Detre 11:13   
So, what you’re doing then is the editors might create the clump of content, the algorithm might tweak it for an individual person. What you end up with, as a result, is something that’s both scalable, but also nuanced. That’s the trade-off, right? Because machine learning usually can scale but lacks a kind of human nuance that makes it either meaningful or feel like it has a distinctive voice.  

Donald Taylor 11:38   
Now, this is fascinating, and I totally get it. So, you’ve got the algorithm able to scale, the machine able to scale, and the human beings providing the nuance, the detail. It’s the opposite of the metaphor I had in my head. The metaphor I had in my head was Michelangelo in his studio with a block of marble, saying to his minions, well, go and carve the bits off the edges of the block of marble, and when it’s about halfway there, I’ll take over and I’ll do the eyelashes and the kneecaps or whatever; you know, you do the fine detail. It’s almost the other way around. It’s as if the human beings, as you say, are categorizing things into clumps, but then the scaling of that down to the fine detail for each person is done by the machines, because that’s the only way you could do it at scale. You couldn’t possibly have that individual tailoring done by human beings. How do you make sure that that teamwork works properly? At what point does the editor give up the job, saying, well, that’s the collection of programs that fit here, and the machine takes over and says, now we’re going to cut it for Donald Taylor or whoever?  

Greg Detre 12:38   
Yeah, and I think that’s where it ends up being a different answer for every problem. You need great communication between your human domain experts and your data science team. I don’t know a way to ensure that this works well, other than by having built relationships up first, creating an environment of safety where each side respects the expertise and value the other is bringing, and a relatively firm commitment to trying to get to what’s best for the organization. Usually, you need some way of measuring whether you’re doing a good job, so that you can say, oh, great, when we add in a little bit of expertise from the human, that helps here; when we try to do this bit with the algorithm, that bit’s not working very well; oh, now it is. If you don’t have a way of scoring things quantitatively and relatively objectively, then you’re probably dead in the water with this kind of approach.  

Donald Taylor 13:30   
If we have two teams, which are both effectively made up of people, one looking after the machine side, the other looking after the human expertise side of it: they are people, and unless there’s some objective measure on the outside saying whether you’re heading in the right direction or not, they’re likely to believe that they are, and you need to pull them back on track. As well as that, of course, as you say, there’s the need for psychological safety. Without naming any names, and maybe it’s never happened to you, but have you ever come across the issue whereby that communication broke down? Where it didn’t work out, for whatever reason? If you didn’t, that’s fine.   

Greg Detre 14:01   
I mean, in some sense, that’s the default. The default is that projects seem promising and then for whatever reason, you have a bunch of meetings, and somehow you just can’t quite get the buy-in. Whether it’s from the domain experts or from the person that’s going to sign off the budget or, I mean, that’s why I have a job as a data consultant because more often than not, it’s very hard to fuse these two kinds of quite disparate approaches.  

Donald Taylor 14:28   
Going back to this idea of content for learning. A lot of people listening to this podcast are people who are particularly focused on learning. It’s a bit like channels and content in your broadcast situation with Channel 4, but you’ve got a lot more different types of stuff. I mean, just in terms of medium, you’ll have PDFs, you’ll have PowerPoints, you’ll have audio, you’ll have video, you’ll have text. You’ll also have things of different lengths, different styles, different formats, covering different topics. It’s a much more complicated set of things. Quite possibly also, you’ll have just a lot more of it: you’ll certainly have tens of thousands, you may have hundreds of thousands of items. How much does that complicate things?   

Greg Detre 15:09   
Well, it definitely does. I suppose one quick and dirty answer is, well, let’s translate everything into just text. So, whether it started out as a richly produced YouTube video or a podcast, ultimately we can re-represent it as just a script, or maybe a short description, like a summary or an excerpt that’s been provided. Usually, machine learning needs something like that to work on. You can’t so easily feed it rich raw materials like you’re describing for it to make sense of. So, we translate everything into a universal domain of text and then filter it with the help of learning and development professionals. We have a bunch of domain experts who are good at thinking about the needs of large companies: what are the skill sets their employees need to develop? We worked with them to first build up a data set of examples where the different learning materials have been correctly categorized, where a human being does that categorization. Then, over time, you try to hand over a little bit more of the job to the algorithm. But you’re always going to need a human in the loop to keep refining things, to realize that the skills framework you devised might need to change over time, because remote working is suddenly really big in 2020. So, you’re constantly evolving with that kind of nuanced human meta-level judgment, judgment about the project itself at a higher level. Whereas the algorithm is just basically saying, okay, you give me a learning material, and I’ll tell you which skill it is. That’s all I do, right? It’s not busily thinking about long-term trends, or the fact that there’s a pandemic and that might change things; it’s just completely oblivious.  
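
A minimal sketch of that loop, with invented example texts and skill labels (this is not Filtered's actual pipeline): humans supply a small labelled training set, and a crude word-overlap model then assigns new materials to skills. In a real system the humans would keep correcting the model's outputs and those corrections would flow back into the training set:

```python
# Toy human-in-the-loop skill classifier over text. Training examples
# and skill names are invented for illustration.
from collections import Counter

# Human: a labelled training set of (material text, skill).
training = [
    ("handling a difficult conversation with a direct report", "feedback"),
    ("plotting a time series of monthly sales in python", "data_viz"),
    ("giving constructive feedback without defensiveness", "feedback"),
    ("choosing chart types for dashboards", "data_viz"),
]

# Machine: count which words appear in each skill's examples.
skill_words = {}
for text, skill in training:
    skill_words.setdefault(skill, Counter()).update(text.split())

def classify(text):
    """Assign the skill whose training vocabulary overlaps the text most."""
    words = text.split()
    return max(skill_words,
               key=lambda s: sum(skill_words[s][w] for w in words))

print(classify("visualizing a time series"))      # -> "data_viz"
print(classify("having a difficult conversation"))  # -> "feedback"
```

Real systems use far richer features than raw word counts, but the division of labour is the same: humans label and correct, the machine generalizes at scale.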

Donald Taylor 16:53   
It’s following the rules, and anybody who’s ever, at the most basic level, tried to write a process or, I don’t know, an Excel formula, or a computer program, will know that you’re convinced you’ve got it right. You put some variables in, and you get some unexpected results, and you have to go back and refine it. That’s, at a very basic level, echoing what you’re saying. So, the human being... sorry, Greg.  

Greg Detre 17:13   
Well, yeah, exactly, as you were saying. So, I think, to take it up a level: this isn’t yet really a hybrid data team, what I described; this is more like, okay, the human being provides a training set and hands it to an algorithm. Where we want to go, where it starts to feel a bit more like a hybrid, joined-up team, would be if the algorithm starts helping with the definition of the skills, right? The algorithm starts noticing where there are gaps, and the human then uses that. So, it’s almost a little bit like a tennis match, where the ball is going back and forth between them quickly, as if they’re standing by the net and just volleying it back and forth, to the point where you can’t quite see exactly where the human left off and the machine took over. That’s when I’ve seen this work well. That’s when you can end up producing high-quality output at scale.  

Donald Taylor 18:06   
Perhaps they’re juggling together, and it’s collaborative, rather than tennis where they are in competition with each other. Let’s go with that. Okay, so we can’t anthropomorphize the algorithm. It’s not thinking about anything. So how does it help categorize the skills? What does it do? Perhaps you set it up to do that, and then it runs off and does it? 

Greg Detre 18:28   
Well, we could have quite a lot of fun talking about machine learning. I don’t know if anybody else will enjoy it as much as I would, but I suppose that’s a big topic. I think in the case of Filtered, they’ve devised quite a clever system that’s based a little bit on trying to extract the most important and salient words, keywords, and phrases that are indicative of particular kinds of documents, particular kinds of skills. So that’s something both the machines and the people can converse in: the language of the most important and salient keywords and phrases. And they’re doing a bunch of other clever stuff layered on top. But that’s just one example. I mean, if you wanted, we could talk a little bit more about writing and creativity in general, and sort of imagine where this is going in the future. Because so far, this is all real, right here and now. But in 10 years, I think it’s going to look very different.  
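
A hedged guess at what "salient keywords per skill" could look like in its simplest form (the documents and skills below are invented, and Filtered's actual system is more sophisticated): score each word by how much more often it appears in one skill's documents than in everyone else's:

```python
# Toy salience scoring: words that are frequent in one skill's documents
# but rare elsewhere are "salient" for that skill. Data is invented.
from collections import Counter

docs = {
    "data_viz": "bar charts line charts axes legends colour scales charts",
    "feedback": "listening empathy difficult conversations listening trust",
}

def salient(skill, k=2):
    """Top-k words ranked by (count in skill + 1) / (count elsewhere + 1)."""
    own = Counter(docs[skill].split())
    other = Counter(w for s, d in docs.items() if s != skill
                    for w in d.split())
    ranked = sorted(own.items(),
                    key=lambda wc: (wc[1] + 1) / (other[wc[0]] + 1),
                    reverse=True)
    return [w for w, _ in ranked[:k]]

print(salient("data_viz"))  # "charts" should rank first
print(salient("feedback"))  # "listening" should rank first
```

The useful property is exactly the one Greg describes: the output is a short human-readable word list, so the experts can inspect, veto, and extend it, rather than staring at opaque model weights.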

Donald Taylor 19:18   
I mean, that’s a big question. We talked at the beginning about how data had been a lot of noise about eight years ago. Now it’s part of our daily lives. Is it going to insinuate itself further into our lives? Are there any things it can’t do? What would you see it doing eventually, in terms of creativity, which is the ultimate bastion of the white-collar worker, isn’t it? No, nobody can write something like me, for goodness’ sake. Are you going to tell me now that it’s possible for something else to write Shakespeare’s sonnets?  

Greg Detre 19:47   
If we start with where we are, and then try to imagine, say, 10 or 20 years into the future: it’s very hard to give great intuitions about what’s currently possible with machine learning, but my best suggestion to you is, if it’s something that a human being can currently do in under half a second or a second, then probably we can teach a machine learning algorithm to do it as well. So, for instance, machine learning can recognize different kinds of flowers by looking at them. Machine learning can translate between French and English, or indeed, you know, write a transcript of what you’re saying. These are all examples of things that humans can do so quickly it almost seems to take us no time at all. But anything that takes you three hours of head-scratching, to figure out exactly the right way to present some point to a board, that’s not something we’re going to see from a machine anytime soon. In fact, I’m not sure it’s something we’re going to see from machines anytime in the next 10 or 20 years. In practice, where I see this going, if we return to our idea of the age of the centaur, let’s think about writing, since that was the example you used. We might ask, well, is an AI going to write an award-winning screenplay? Probably not, because I think there’s still just too much humanity in it, too much of a sense of, say, what hunger is; you kind of need human physiology to know what hunger feels like, to be able to write about it in a compelling way. That said, just as the best chess players in the world for the last 20 years weren’t humans or machines but centaur teams, I think the best writers in the world in 20 years may not be machines. They may not be humans, either. They might be centaur teams; they might be great human writers with access to interesting tools. I suppose we could think of some examples. 
So, imagine, I was thinking about a Channel 4 show involving Detective Dearing, who’s sort of foul-mouthed and hilarious. Let’s say you’re writing your script, and you write, Detective Dearing says, right, I’m going home now. I think the algorithm could kind of go, nah, our characterization detector says that doesn’t sound like her. It’s not sufficiently differentiated from everybody else in the cast. And of course, in practice, she’s much more likely to say, I’m going home for a shave, a shampoo, and something else that begins with “sha”. That’s much more in character for her. So, you could imagine how the tone of voice could be noticed, and you could start to see that these two characters are perhaps too similar to one another. You might get clues or help or nudges from an algorithm that helps you differentiate.  
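
A purely speculative sketch of such a "characterization detector" (the character names and dialogue below are invented, not from any real show): score a draft line against each character's past dialogue and flag it when it fits another character better:

```python
# Toy characterization detector: a per-character unigram language model
# with add-one smoothing. All dialogue is invented for illustration.
from collections import Counter
import math

dialogue = {
    "dearing": "bloody hell right you shower move it bloody nora",
    "sidekick": "yes ma'am right away understood of course ma'am",
}

vocab = {c: Counter(text.split()) for c, text in dialogue.items()}

def fit(line, character):
    """Smoothed log-probability of the line under a character's
    unigram word frequencies: higher means more in character."""
    counts = vocab[character]
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(counts)))
               for w in line.split())

def sounds_like(line):
    """Which character does this draft line fit best?"""
    return max(vocab, key=lambda c: fit(line, c))

print(sounds_like("right away of course ma'am"))  # -> "sidekick"
print(sounds_like("bloody hell move it"))         # -> "dearing"
```

A writer's tool built on this idea would not write the line; it would nudge, exactly as described, when a line scores higher for the wrong character.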

Donald Taylor 22:18   
It’s extraordinary. I think about Coronation Street, which was originally conceived and written by Tony Warren, a young man in his early twenties. He had a fabulous ear for dialogue because he’d grown up largely sitting under the table in the kitchen, listening to his mom and his neighbors talking. All of the pithy dialogue that was part of working-class life in that street, the life represented in Coronation Street, came out of his head. Of course, now what we’re saying is, well, we can probably not rely on one person having, let’s say, 20 years of experience of that before they start writing, but we can have algorithms pick it up and tell us where we need to tweak the script. And we’re back to Michelangelo and his David. So, the nuance is being added here by the people, but somebody else, in this case the machine, is detecting where the work needs to be done. Could it also detect where a story arc might not be going correctly, and where there’s a gap in the plot? Or is that too much?  

Greg Detre 23:11   
Well, it’s hard to know. I think the story arc is a good example. We have an intuitive sense of a story arc: if we think about the hero’s journey, there’s often some kind of descent into chaos and disorder, and then maybe some redemption at the very end. If we don’t get that kind of satisfying emotional resolution, if we don’t get that sense of an arc, of things landing, of our emotions going on a particular shaped journey, then we don’t feel that satisfaction that is, you know, the hallmark of a great story. It seems intuitive to me to imagine that we might be able to visualize those story arcs with the help of an algorithm. So, you might be able to see, you know what, that hasn’t quite landed, or there’s a loose end there that we’re missing. To be able to see those, both at the level of a scene and maybe at the level of a series, those are tools that might enable an already great human writer to do an even better job.  

Donald Taylor 23:16   
That’s what it comes down to, because ultimately, any tool, as soon as it becomes cheap enough, becomes not a differentiator but a commodity. Because if you can get an algorithm to do something for one scriptwriting team, other teams will be using it, and the difference will be provided probably not by the algorithm but by the people sitting on top of it, the heads of the centaurs leading the teams. Is that fair enough?  

Greg Detre 24:25   
I think so. I guess it’s worth saying that we can expect this to evolve. It’s not going to be a static scenario. The proportion of humans and machines in the centaur is going to change for different tasks at different rates. But the further we are from something being the kind of thing that you can do in half a second, the slower I’d imagine the machines are going to start to be able to get at the meat of it.  

Donald Taylor 24:46   
It’s slightly mind-blowing to think of the centaur age; I love the term. It does encapsulate for me where we’re heading and also, of course, not just where we’re heading, but what we’re actually doing right now, very often without even thinking about it. Greg, thanks so much for coming on the show. I’m going to ask you two more questions, which we always wrap up with. The first could have a very long answer: what do you wish you’d known when you started out in the whole world of learning? And the second one is, what are you curious about right now? Which I’m going to have to restrict you on, because I suspect you’re very curious about several things. So, what do you wish you’d known when you started in learning and development?  

Greg Detre 25:22   
I was fixated, as an ex-scientist, on efficacy: on how quickly I could help someone to learn, and how well they would retain what they had learned. In practice, just like in physics, how far we get is a function of how fast we go, and for how long. How much we end up learning depends not just on how fast we learn, but also on how long we stick with it. So, in other words, the engagement and the motivation, keeping people going at it, ends up being more important than just how well it works. So, I’ve ended up spending a lot more time thinking about behavior change and making something feel good to use than making it as efficient as possible, even if that efficiency makes it a sort of miserable and tiring experience.  

Donald Taylor 25:28   
Ideally, it’s both of course. You have the motivation to start, the motivation to continue, and these are maintained by the fact that you’re learning and that it’s a good experience.  

Greg Detre 26:16   
That’s exactly right. Yeah, my second lesson, what I’m really curious about; perhaps it’s related. I’m kind of obsessed with tools for augmenting our own brains. One of the things that we know about ourselves is that we evolved to move around in a three-dimensional world, in spatial navigation, and that we’re very embodied, right? We aren’t just brains in a vat; we are brains in a body, and the body affects the brain and the brain affects the body. Well, this, I think, is starting to become well known, and it’s going to become clichéd. What’s interesting is the degree to which the way that we use our body helps us think, and that virtual reality, because it offers an embodied interface, potentially might be a dramatically more effective medium for thinking than typing on a keyboard, which is almost like thinking laparoscopically. You know, like those keyhole surgery cameras, right? I’ve got these little fingers, and the only things that are moving are my fingertips, while the rest of my body is held completely still. I’m not making use of the gigantic tracts of the brain that are involved in thinking about where things are in the world, in moving and motor behavior, moving my arms around, figuring out where stuff is relative to me, and visualizing. We’re not using any of that. So, I imagine a world in which we think in virtual reality, in which we write and strategize and plan using a kind of embodied environment, a garden of the mind of our own that we can manipulate.  

Donald Taylor 27:42   
That’s making me think about a book called “The Singing Neanderthals” by Steven Mithen, in which he posits that originally, as Neanderthals and other premodern human forms (he uses “Neanderthal” as a sort of catch-all word to cover them all), we had a variety of ways of expressing ourselves, including singing, motion, and other things as well. Those have been reduced primarily to the text- and language-based form of communication we focus on now. It may be that what you’re suggesting there, Greg, is that using technology we can get back to a more fundamental way of communicating and thinking. I would love to have you come back to the podcast and share with us in the future what you’ve discovered on that journey. But for the moment, my mind is sufficiently blown just from the conversation we’ve had today. I want to say thank you so much for coming on the Speexx Exchange podcast.  

Greg Detre 28:04   
Such a pleasure, thanks!  

About Donald Taylor

Donald Taylor

Chairman of the Learning and Performance Institute since 2010, his background ranges from training delivery to managing director and vice-president positions in software companies. Donald took his own internet-based training business from concept to trade sale in 2001 and has been a company director during several other acquisitions. Now based in London, he has lived and traveled extensively outside the UK and now travels regularly internationally to consult and speak about workplace learning.


About Greg Detre

Greg Detre

Greg Detre is a data consultant and former Channel 4 Chief Data Scientist who works with fast-growing startups and larger companies as an advisor and coach, helping them build and develop their data/technology teams. Currently, he is a board advisor and data/engineering leadership coach for Filtered.com, advisory CTO for Heights (yourheights.com), board member of Advisors for Calm Island, and machine learning advisor for Captur.


Would you like to test Speexx?

Try us

More Speexx Resources

Learning Experience Design

Learning Experience Design

While access to revolutionary technology in learning is at an all-time high, many organizations still choose to implement archaic and ineffective tools for training and development. One way to close this gap for the modern learner is to focus on the entire learning experience using learning experience design principles.

Download
Putting Humanity Back Into HR

Putting Humanity Back Into HR

It is time to develop a strategic view. Download this whitepaper and learn how you can support your teams during this accelerated digital transformation, providing them with the necessary skills to successfully adapt to the new way of working.

Download
Beat the Forgetting Curve

Boosting the Business Impact of Learning

A strategic approach to employee experience must combine data with human, soft skills. Only when people are able to communicate effectively with other people across teams and borders, will HR and L&D be able to demonstrate a measurable business impact of learning.

Download