DLS Vernacular Reality Podcast, Episode 4: Guest Jordan Higgins
04.08.2021

Vernacular Reality Podcast Ep. 4 Transcript

Blythe Collins: Welcome back to Vernacular Reality! I am back with Sean, as always, who is Head of Immersive Software at DLS, and today we have a guest. We have Jordan Higgins here with us. He is an XR designer and adjunct faculty at George Mason University School of Art. He’s here to talk to us about the VR scene in the area, VR innovations in general, and how he’s using VR to improve training.

So, let’s just start out with an intro to you and your work.

Jordan Higgins: Most of my background has been in user experience design and more traditional web design. About four years ago, I got my hands on the HoloLens, and the company that I was working for, we got really, really into it. You know, this idea that we could bring holographic content and 3D objects into the world with us in this sort of mixed reality space was very, very exciting to us. We saw this potential for spatial computing to change all sorts of things.

The company I was with became one of Microsoft’s first HoloLens agency partners. From there, we started working a lot with virtual reality. Four years ago, the technology was starting to get more accessible, easier to create content for, and it was just sort of this fantastic rush to find, ‘where does it really change things and deliver value in ways that it previously hasn’t?’

Aside from that, I also teach at George Mason University’s School of Art. I teach web design and usability and design thinking. One of the things that that we’re very big on right now is teaching Web XR – this idea that we can create immersive experiences that can be delivered on multiple devices in the web browser.

Sean McBeth: Yeah, you mentioned your background being in web development. That’s how I got started in VR, also. I had my own freelance consulting practice, doing web and database development, and Google had announced Google Cardboard, and I just happened to have some lenses lying around from some old photography projects. So, I just kind of wrapped them around my cell phone with some duct tape and the cardboard box they came in…

Higgins: Wait. Are you saying you built your own Google Cardboard? Like, from scratch?

McBeth: Yep, from scratch. I was watching the Google IO conference where they announced Google Cardboard, and I realized I could make this right now, while they were talking about it. And that’s how I made my first VR headset and immediately knew, ‘this is a big deal, this is what we need to be doing.’

Higgins: That is awesome. When I first started working with VR, it was actually before I started working with the web. It was, like, the late ’90s, and it was this platform called Alpha World, where you could go in and build virtual worlds in this sort of desktop client. Very surreal. Imagine a very primitive Minecraft type of world. But, you know, it really highlighted that accessibility problem, though. I think that was about the time that the first location-based VR experiences happened, right? Like Dactyl Nightmare. I was in school down in Southwest Virginia, and I remember driving about an hour to go to a mall that had Dactyl Nightmare. It looked like something from the future – you put a big, giant headset on, and I remember it was like 25 bucks or something for five minutes. I remember thinking it was the most I had ever paid to get that sick. But these devices, they were thousands and thousands of dollars. And to create content for them was very, very arduous and time-consuming. Fast forward to that Google Cardboard announcement, and that really was a pivotal, game-changing moment. I remember seeking out a Google Cardboard after that, but you were able to actually go and build a VR headset out of found materials. I mean, that’s just amazing.

Collins: Perfect. So, bringing it to the present day.

I know that Jordan has worked a lot with virtual reality for training, as has Sean, which is what we talk about a lot on this podcast. So, Jordan, if you could just cover that?

Higgins: So, at the company I used to work at, we had a subsidiary that specialized in using mixed reality and virtual reality to help train professional sports teams. Specifically, NFL quarterbacks and college football teams. Things that would let a professional athlete use the tools they’re used to using, like PowerPoint and Visio (there was also a custom desktop application), to draw out plays and then be able to visualize them in the device. Basically, this was the idea of a wearable computer that you can throw in a backpack. So, anywhere you are, you could put it on and run through plays and visualize them at true scale with spatial audio. You could practice making completions, things like that. One of the early benefits that we found broadly is that our brains are hardwired to experience the world in 3D. It helps us form memories faster. It helps us whenever we can combine multiple senses, whether it’s audio and that sense of scale along with the physical motions of using our hands or moving around a space. The more we can engage someone’s multiple senses, the faster the training will be. We’re seeing this in the industry all over the place. There are a lot of examples in manufacturing, in healthcare, in medical schools – in really anything that can activate that spatial component, you’re seeing faster times to get people performant on a task, and also higher rates of retention down the road.

So, one of the things that we were doing in my last job was looking at how we could take those lessons that we learned from the professional sports world and apply them to DoD training, to bring them into environments where there are similar sorts of challenges. Sometimes, high-fidelity, physical immersive training, like building big simulators, can be very expensive. They can be geographically locked, requiring a lot of people to travel to one place to do a simulation. There’s no substitute for that high level of fidelity. But getting something that is, in DoD parlance, the 80% solution in a timely manner, something that still accelerates training, gets you more immersion, and gets you more performant, has a ton of benefits down the road too.

I think one of the other things that we’re really excited about is that when we’re in these VR headsets or mixed reality headsets, we’re also generating spatial data. We’re generating data about people’s movements, eye-tracking, general performance data, time on task, successful completion rates. All these traditional metrics that we would look at, combined with this new universe of spatial analytics. I think we’re just starting to tap into the wealth of potential insights that are out there. But I think it’s changing fast.
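
[Editor’s note: as a rough illustration of the spatial analytics Jordan describes, here is a minimal JavaScript sketch that samples head pose inside a WebXR frame loop and records it alongside traditional task metrics. The `session`, `referenceSpace`, and `logTelemetry` names are placeholders for illustration, not part of any product discussed here.]

```javascript
// Hypothetical sketch: combine traditional training metrics (time on task,
// completion) with spatial data sampled from a WebXR session.
// `session`, `referenceSpace`, and `logTelemetry` are assumed to exist.
const samples = [];

function onXRFrame(time, frame) {
  const pose = frame.getViewerPose(referenceSpace);
  if (pose) {
    const { position, orientation } = pose.transform;
    samples.push({
      t: time,
      head: [position.x, position.y, position.z],
      gaze: [orientation.x, orientation.y, orientation.z, orientation.w],
    });
  }
  session.requestAnimationFrame(onXRFrame);
}
session.requestAnimationFrame(onXRFrame);

// Called when the trainee finishes a task.
function completeTask(taskId, startedAt) {
  logTelemetry({
    taskId,
    completed: true,
    timeOnTaskMs: performance.now() - startedAt,
    spatialSamples: samples,
  });
}
```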

McBeth: Yeah, it’s really interesting stuff. The boundaries for what we can do with all this data, in the training context, are just really huge.

Higgins: I had a chance to see some of what Sean’s working on with taking 360 environments and putting people directly into them, and making those connections between the things that we’re saying and what we’re hearing in the actual environment that we’re in. I mean, Sean, I think you showed that to me two months ago, but just having an experience somewhat at scale, even through the web browser, I have more of a sense of the space of that experience, and I imagine that if I had actually done it inside of a headset, then that would be an even stronger association with the content that you’re conveying.

McBeth: Yeah, we’ve actually had a chance to run a few students through the application now and the feedback has been amazing. Our students are saying that they feel a much greater connection to what they’re talking about, they see details in the subject that they don’t think they would’ve noticed before, and it’s easier to think about this place as a real place when you’re in the immersive context. In language training, we talk about immersion. We talk about surrounding the student with nothing but the target language. We talk a lot about playing in that context of being within the culture. It’s not just about the language, it’s also learning about the culture that you are targeting. And by having this immersive environment, it makes it more real for the student and it gives them more motivation to learn. It’s more compelling, it’s more fun, and it just kind of sits in the brain better, I guess. Instead of trying to read about a place out of text and having no real concept of what you’re talking about, and trying to learn the words about that place, and trying to learn what it means to those people that you are discussing, you get to shortcut that whole cognitive process and just actually do it.

Higgins: Yeah. It’s optimized for the way that we learn, right? Like, when we first learn languages, we are immersed in this world that is very new to us, and we’re seeing how language relates to people’s actions, to facial expressions, to physical objects in the environment. I remember from one of my cognitive psych classes being fascinated by the idea that our senses are constantly perceiving things even when we’re asleep. Our brains are receiving some sensory input from our ears, from our sense of touch, that’s constantly being processed, evaluated, and analyzed. Then, you have that sort of executive function in the brain that basically sorts through all the noise and decides what you need to know. Out of all of these possible data inputs that are coming in, what are the important things? So, even if you’re not consciously thinking, ‘OK, yeah, this person is wearing a ticket counter uniform,’ or, ‘this looks like a kiosk,’ that still contributes to the overall experience that your brain is processing and helps establish those patterns that become memories and associations.

What have the instructors said?

McBeth: The instructors have enjoyed it, as well. They really like the ability to engage with the students on this much deeper level. They’re already seeing that the students are more engaged. All of those things that make the students better students make it a more enjoyable experience for the teacher, too. You know, a teacher with better students gets to do better work.

Higgins: I hadn’t really thought about this before, but this whole idea of Zoom fatigue, right? Like, going from virtual meeting to virtual meeting to virtual meeting, and just kind of being tired from it. I’ve noticed it as my classes have moved online, too; the students are feeling that it’s having a huge impact on them. I was doing a brainstorming workshop, specifically on ‘how do we improve the student experience?’ for my design thinking class. And it was amazing. We were doing an empathy map, where we were learning how to put yourself in a user’s shoes and think about what they’re hearing, seeing, saying, stuff like that. But nothing anybody was saying was positive. I’ve definitely seen a bunch of empathy mapping exercises where it’s been more negative than positive; usually it’s in the context of trying to solve some problems, so that’s to be expected. But nothing positive at all. There’s so much fatigue from this remote, distanced, not face-to-face, screens-within-screens way of interacting with each other. That, I think, is where I’m really seeing a lot of potential for the classroom environment – being able to bring people into more of a 3D space to get more of that sense of scale. It seems like that’s something your students would probably be really interested in, too.

McBeth: Yeah, well, if you think about traditional teleconferencing software, it’s designed to replicate the conference room. So, the whole paradigm is centered around a single speaker projecting information at the other attendees. As a language instruction company, we’re about helping people communicate. When you go somewhere and you speak in that person’s native language, you are saying something even before you begin saying words. You’re saying, ‘I care about this conversation enough to have learned your language.’ So, all of those nonverbal cues are incredibly important to this task, and they’re completely lost in traditional teleconferencing software. Whereas with virtual reality, you see an avatar of a person – you’re not seeing a real person – but you’re seeing a whole body, you’re seeing it in relation to your body, you’re seeing it in relation to an environment, and you’re surrounded in this environment. This environment has other sounds in it. We put sounds of birds in the background. If the scene has other people in it, we put the background noise of a crowd there. So, it gives you that sense of being able to look at somebody and direct your attention at them.

How many times have you sat in a Zoom meeting and felt self-conscious about whether you’re looking at the camera lens or at the screen?

Higgins: Too many. All the time.

McBeth: Right? In virtual reality, that’s gone. You just look at the person’s avatar. If somebody over your shoulder starts talking, because of the spatialized audio system, you hear it coming from the side. So, you no longer think about interacting with the software, you just do. You just use all of the conversation techniques you already have. We have motion tracking for the avatars. So, you can do things like shrug, you can shake your head, you can nod your head, and that has communicative meaning that you’re not really going to get the same way on a Zoom screen, where maybe the user doesn’t even have their video turned on, or they have their camera set up so that it’s cutting off the bottom half of their face, because laptop cameras are not positioned perfectly.
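
[Editor’s note: the spatialized audio Sean describes, hearing a voice from where an avatar is standing, is the kind of thing three.js provides out of the box. Below is a minimal, hypothetical sketch; the `scene`, `camera`, and audio file path are placeholders, not DLS code.]

```javascript
import * as THREE from 'three';

// Minimal sketch: positional (spatialized) audio attached to another avatar,
// so speech appears to come from where that avatar is standing.
// `camera`, `scene`, and the audio file path are assumed for illustration.
const listener = new THREE.AudioListener();
camera.add(listener); // the listener follows the local user's head

const avatar = new THREE.Object3D();
avatar.position.set(2, 0, -1); // over your right shoulder
scene.add(avatar);

const voice = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('voices/remote-user.ogg', (buffer) => {
  voice.setBuffer(buffer);
  voice.setRefDistance(1); // how quickly the voice falls off with distance
  voice.play();
});
avatar.add(voice); // the sound now pans and attenuates as you turn your head
```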

Higgins: So, that actually makes me think about one of our early HoloLens concepts, where we were really interested in the ability for remote users to kind of share a holographic experience from across the country, across the world, wherever they happened to be. You know, having one person wearing a HoloLens on the East Coast, another person wearing a HoloLens on the West Coast, and looking at a 3D map of an open area (this was a military simulation thing that we were looking at). Traditionally, they would build these big, giant maps in gymnasiums, people would walk around the maps, and it would require everybody to be in one place. It was really, really exciting to actually be able to have that holographic avatar of someone sharing a space. So, rather than having to get on a plane, fly across the country, and take two or three days out of your schedule, you just put the device on and beam into this shared holographic world. And then being able to convey a lot of the same sorts of interactions that make that meaningful, like being able to see where someone else is looking, or, like you’re saying, knowing when somebody is talking and that they’re behind you, so that you turn around and look at them. Those little things, you know, add up into a more meaningful, natural experience.

Avatar design was a whole other fascinating deep-dive area that we went into, especially in military environments, where you had people from different branches of the service and people with different ranks. In some of the early stuff we were doing, we just had one avatar. That was the generic avatar. We started to get into design studies of avatars where your branch of service was reflected in the uniform you were wearing and your rank was reflected in the type of uniform, and this was before we had more realistic – you know, at some point way down the road, we’ll have 3D volumetric video that perfectly matches you. We’re a ways away from that being real-time now, but in the short term, there’s so much potential for those avatars to convey different aspects of culture. I was thinking about that in terms of military culture, but it seems like that would have other cultural implications, too, for the type of training you’re doing.

McBeth: It’s interesting, I suddenly had a flash of memory to first starting out on the Internet, being on message boards, and not having a photo of myself that I could use as a profile photo, because we didn’t have cell phones or smartphones with cameras in them yet. And today, we all have whatever photo we want up there. We change them to suit our feelings for the day. I think that’s one of the benefits of non-realistic avatars: it becomes another form of expression, and it’s something that you can have another degree of control over in how you present yourself to people. We have the technology to do full 3D scans of people, and it only takes a couple of minutes. Then, you can take that 3D scan of a person and use it as a 3D model that you can then morph and control. As this technology grows and becomes more of a day-to-day portion of people’s lives, I think that’s something that people are going to want. I think people are going to want to be able to put themselves into the environment.

Higgins: Yeah. I mean, that’s sort of a natural extension. We’ve seen this in game design for ages. People love to express themselves through their character design. It just seems like sort of a natural bridge to bring that into more everyday training and productivity types of applications.

One of the things that I think was really effective about the platform we were working with was that we designed the scenario-building tools to work in the language and manner of the subject matter experts, in this case coaches or military leadership, using sort of the same tools and terminology that they were familiar with, to then automatically build the mixed reality or VR content. I think that’s been one of the huge barriers to adoption, right there. Like, the idea of building immersive training scenarios used to require a small game development team. It required 3D modeling, it required programming, it required UX – all these sorts of different things that had to come into it. The more we could build a platform around a specific training need or a set of parameters, the easier it was to put the content creation tools into the hands of the experts who could build it.

McBeth: Yeah, you look back at the history of computing and you look at the early transition from text interfaces to graphical user interfaces, and it didn’t come from the operating system at the start. If you wanted your software to have this graphical interface, you had to have graphics experts on your software team to be able to make it happen. We take it for granted today that you can have a button on the screen, and it can be clicked on, and it can fire an event that you can respond to. That’s stuff that any junior programmer can set up nowadays. We have software tools so that designers can often wire up these things on their own. But when it first started, it took knowing how to draw these things manually, knowing how to fill up a grid of numbers that would be interpreted as colors to draw a box with some text in it. Knowing how to do the math of reading the mouse location, figuring out it was on top of the button, and knowing how to then propagate that into an action. That’s kind of where we’ve been for the last five or six years of virtual reality. The system itself doesn’t provide a lot. It’s more on the development side to be able to make things happen, but we’re seeing change there, as well. A lot of new frameworks are coming out for building software, even within the open-source world. People have been using Unity and tools like Unreal to build VR stuff, but that’s still more on the programming side.

Higgins: I think A-Frame was a game-changer for me, just as somebody who had been building websites since the early days. Being able to create AR/VR content using HTML, CSS, and JavaScript, it was a very familiar environment to start working in. And then, seeing the stuff that we can do now with, like, three.js, that by itself is a huge game-changer.

In one of my most recent projects, we built a mobile augmented reality game for a government client, where you could actually scan a quarter, like a normal, everyday coin, and activate a 3D model that came to life and interacted with web page content. This is a type of experience that wouldn’t have even been conceivable as a web app until fairly recently, doing all sorts of things on the actual mobile device. Previously, we would have had to build this in Unity or Unreal and deploy it as an app across multiple platforms. Being able to do that with web standards, it was a lot of three.js, a lot of WebGL, and built on the React framework. Just a game-changer. I think that’s where we’re going to start to see a real push towards more consumer adoption, too. You know, that ability to create these experiences that don’t require downloading an app. Web designers and web developers and UX people will be able to build in the environments that they’re familiar with. It’s not learning something completely new from scratch; it’s enhancing things that we’re already familiar with. That feels like it’s going to be a very rapid shift.
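
[Editor’s note: for readers curious what “VR with web standards” looks like in code, here is a minimal, generic three.js sketch of a WebXR scene that runs flat in the browser and can be entered from a headset. It is not the quarter-scanning app or the DLS platform, just an illustration of how little boilerplate this now takes.]

```javascript
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// Minimal sketch of a WebXR scene: one spinning cube, viewable flat in the
// browser or in a headset via the "Enter VR" button. Generic example only.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true; // turn on WebXR support
document.body.append(renderer.domElement, VRButton.createButton(renderer));

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.3, 0.3, 0.3),
  new THREE.MeshNormalMaterial()
);
cube.position.set(0, 1.5, -1); // roughly eye height, one meter ahead
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```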

McBeth: Yeah, I think that’s why for the last few years, we’ve heard so much about ‘this VR stuff’s not going to take off, it’s not relevant to the general public,’ because up to now, everything has just been technology demos. It’s just been graphics and waving a controller around in the air. It hasn’t been connected to anything in the world, and our world today is oriented around the web. Now that we can do VR on the web so easily, we can connect to the vast wealth of web services that are out there. We can start pulling in the data that you care about. I mean, that’s where the real value of every application lies, in the data that you have available to the application. The interactions that you design for that application are important. You’re going to give people a really bad time if you don’t get that right, but you’re not going to give them a great time if you don’t have something important for them to do once they’re there. I think that’s one of the biggest things that’s different about just the last year or two in the VR industry: this ability to start actually building important applications, because we finally have the tools to connect to all the data we have.

Higgins: Absolutely.

McBeth: What do you think are some of the more important developments going on right now?

Higgins: So, we just talked about WebXR. Just last weekend, they held the first-ever WebXR Awards, where they were recognizing a lot of great work: developers, frameworks, entertainment experiences, educational experiences. That community alone has just exploded in the past year. So, I think we’re going to see a lot of new people, a lot of new ideas, a lot of people coming from outside of traditional designer/developer backgrounds. Many people are coming in with ideas and will be able to start building them out, because they actually have the tools to do it. I think that’s going to be huge.

The Oculus Quest, definitely. Lots of indicators about how, at that price point, it’s getting into more people’s hands and becoming more of a natural part of people’s computing experiences. This question can’t go without also including at least some speculation about whatever Apple is going to come out with. The latest now is that it’s going to be less like smart glasses and more like a mixed reality headset that’s doing things like pass-through video. Between that and devices like Nreal and whatever Facebook’s cooking up. Samsung has a new smart glasses concept. There definitely seems to be that race for getting things on people’s faces to get them off of looking at these small little screens and more into seeing that digital content around them. I think it’s just going to be really, really interesting to see, ‘will consumers actually start to gel around a format, kind of the way that they did with the smartphone?’

A lot of this stuff has felt like what happened when Apple released the iPhone. Like, that transformed the industry, because all of a sudden we started to think about things like ‘what does a mobile app do that the website doesn’t? Why would we build a mobile app?’ And then also, ‘what do we need to do to build mobile websites?’ That changed the whole web design and developer world. It’s going to be interesting to see, with whatever is coming out in the near term, what actually takes us beyond these very specialized, specific-to-an-industry applications and becomes more of an actual platform that people start really building on, one that really gets out into broader adoption.

McBeth: Yeah. It’s interesting you brought up the iPhone and smartphones. I kind of have this mental model that what smartphones really did for us was free up our feet: we were no longer tethered to our desks, and we could go out into the world and still have a computing device with us. I think that’s why I’m really excited right now about hand tracking – getting away from motion controllers and going to just bare hands being tracked by our virtual reality devices. Now that we’ve freed up our feet with smartphones, I want to free up our hands with hand tracking.

Higgins: Hand tracking has been really, really fascinating – seeing how much that’s kind of taken off and some of the things that people are doing with it now. I think I saw one demo where somebody was, like, tying knots using hand tracking. I mean, that just blows my mind. That’s after spending years with the first-gen mixed reality things where we were just doing this – and for the podcast, I was just doing a tap gesture, a virtual tap in the air.

McBeth: Yeah, I don’t know if you remember this old comedy show, The Kids in the Hall. There was a character who would crush your head.

Higgins: Oh yeah, they’re crushing your head. Absolutely. I’m actually surprised that nobody built a head-crushing game for the HoloLens.

So now, though, you’ve got much higher-fidelity interactions that are possible. Actually, Snap’s Lens Studio – and I feel like this is probably worth talking about, right? Like, the Spark AR platform and the Snapchat lens platform have just empowered an entire generation of content creators that previously would probably never have considered augmented reality development. They’ve built these fantastic tools for highly creative technologists and designers and content creators, people who are used to making videos for YouTube or TikTok or whatever, turning them into filter creators and augmented reality creators. Those platforms are amazingly democratizing. Like, it’s the ability to go from no skills at all to publishing your first Instagram or Snap filter very, very fast. It’s very empowering. What made me think about the Snap platform was that they just released an update, I think, that has really, really robust hand tracking, to be able to track specific fingers and gestures and things like that. We’ve also got the accessibility community, which has also moved very, very quickly, especially once accessibility became part of the draft web standards for WebXR. The attention that’s getting paid, and the conversations that are happening around ‘how do we make this technology and these experiences more accessible?’ It’s very powerful to see that happening so quickly, too.
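
[Editor’s note: as a rough sketch of what hand tracking looks like at the WebXR API level Jordan alludes to, the snippet below requests tracked hands as an optional feature and checks for a simple pinch gesture each frame. This is a generic illustration of the draft WebXR Hand Input module, not any particular product’s code; WebGL layer setup is omitted for brevity.]

```javascript
// Generic sketch of WebXR hand tracking: request hands as an optional feature,
// then read fingertip joints each frame and detect a simple pinch gesture.
// Render/base-layer setup is omitted for brevity.
async function startHandTracking() {
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['hand-tracking'],
  });
  const refSpace = await session.requestReferenceSpace('local-floor');

  function onXRFrame(time, frame) {
    for (const source of session.inputSources) {
      if (!source.hand) continue; // a controller, not a tracked hand
      const thumb = frame.getJointPose(source.hand.get('thumb-tip'), refSpace);
      const index = frame.getJointPose(source.hand.get('index-finger-tip'), refSpace);
      if (thumb && index) {
        const dx = thumb.transform.position.x - index.transform.position.x;
        const dy = thumb.transform.position.y - index.transform.position.y;
        const dz = thumb.transform.position.z - index.transform.position.z;
        if (Math.hypot(dx, dy, dz) < 0.02) { // fingertips within ~2 cm
          console.log(`${source.handedness} hand pinch`);
        }
      }
    }
    session.requestAnimationFrame(onXRFrame);
  }
  session.requestAnimationFrame(onXRFrame);
}
```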

McBeth: Yeah, it’s very exciting times to be living in.

Higgins: And always great to talk about it with you. Like, every day there’s something new to update about or talk about.

Collins: Well, I’m happy that I could facilitate. It was nice to listen to two experts.

Higgins: What is your perception of the technology? Like, when you’ve seen VR experiences or augmented reality.

Collins: My main frame of reference is Snapchat filters or Instagram filters because that’s what it made me think of when you were describing the coin application. It made me think of Snapchat. I’ve never gotten into it, but I know how easy it is. I know that you can like submit your own and do it yourself, which I think is cool. It’s just interesting because I feel like if I am hearing about these innovations, then it’s pretty mainstream. Which is, I think, notable because now it’s like immersive software is for young people and for, you know, anyone who’s using these platforms, which is a lot of people. And not the typical person who would be throwing themselves into immersive software research, who, I don’t know, wouldn’t have that much background knowledge, anyway.

Higgins: Yeah.

Collins: I have learned a lot just from talking with Sean. Last episode, he outlined a bunch of different platforms, which I, of course, didn’t know existed, and it’s just so interesting how many different things VR and AR and mixed reality can be used for, just so many different topics! From, like, exercise, to decorating the world around you, to training, obviously, which is the primary one that I’ve been learning about.

McBeth: I think that’s one of the frustrating things about being a practitioner in this field and watching the popular tech press talk about VR and only ever talk about games. It’s almost always coming with some sort of assertion that ‘VR has died again, we’re never going to hear about VR again,’ because some game didn’t sell a billion copies. That’s their metric of success, and it’s not about that at all. It’s about fitting computers into people’s lives better. Right now, we have people really talking about how much they are a slave to their machine and how they always have their heads in their smartphones, but with immersive technology, we can take that away completely. We can put you back into the world. Like, augmented reality. Augmented reality doesn’t even necessarily have to have graphics. We could do augmented reality completely in an audio space, which is a really fascinating place to study right now. When you have that, then you can be out in the world, interacting with things around you, but not disconnected from what’s halfway around the world. You can be more connected to the world, both your local immediate environment and the rest of the world, at the same time.

Higgins: So it’s very clear that at DLS you’re building the future of language training platforms with immersive in mind. What is the next stage in that?

McBeth: I’ve always tried to keep a design mantra in mind that we’re not here to build virtual reality, we’re here to enhance the teaching and the learning experience. So, in this particular case, virtual reality is a way to enhance that connection to culture. So, whatever it takes to do that, to continue to enhance that student and teacher bond. We’re looking at tools to give teachers the ability to create their own scenarios on the fly. I kind of think of it like being a wizard, being able to control the environment around you. You know, being able to make an apple appear right in front of you so you can talk about, ‘what’s the word for apple?’ Toss it around and talk about, ‘what is the word for throw?’ And actually throw the apple to the person. I mean, communication. We have teleconferencing built into the application. Any way we can enhance that communication. Maybe it’s getting out of the lesson-oriented modality that we have right now and just having meeting rooms that you can pop into with a person.

Higgins: That’s interesting. It seems like there’s a whole new category of educational models that will gel around that, too. I think what makes that so effective, though, or what I think is really strong about it, is that it’s not focused on the technology, right? It’s very easy to get hung up on the universe of the latest and greatest headset or framework or whatever. With the focus on the practical application of it, it becomes an evolution of what you’re doing, rather than, you know, something that’s kind of a hammer looking for a nail.

McBeth: We’re also looking at being able to make these tools more available. Accessibility, as you mentioned before, is a big component of our development process. Being built on the web platform, we get desktop and tablet support. It’s one copy of the software that runs everywhere, which is a lot easier than how it was when I first started this project in Unity. I started the project in Unity and then, about a year ago, came back home to web development for the project. And we’re looking towards using speech recognition and text-to-speech systems to give the user different ways of interacting, to provide more information between the student and the teacher. I have this interesting idea of using a speech recognition system for the target language to rate your pronunciation. If you can get the speech recognition system to understand you, then you’re probably saying the words correctly.
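
[Editor’s note: the pronunciation idea Sean floats here maps fairly directly onto the browser’s Web Speech API. Below is a minimal, hypothetical sketch; the language code and confidence threshold are illustrative, and this is not the DLS implementation.]

```javascript
// Minimal sketch: use browser speech recognition in the target language as a
// crude pronunciation check. If the recognizer understands you, you are
// probably close. Language code and threshold are illustrative only.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognizer = new SpeechRecognition();
recognizer.lang = 'fr-FR';      // target language, not the student's native one
recognizer.maxAlternatives = 1;

recognizer.onresult = (event) => {
  const result = event.results[0][0];
  console.log(`Heard: "${result.transcript}" (confidence ${result.confidence.toFixed(2)})`);
  if (result.confidence > 0.8) {
    console.log('Recognized cleanly; pronunciation is probably on track.');
  } else {
    console.log('Low confidence; worth another attempt.');
  }
};

recognizer.start(); // prompts for microphone access and listens for one phrase
```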

Higgins: I’m horrified to think how my French accent right now would be evaluated after many, many years out of school.

McBeth: Well, thank you, Jordan. As always, it is a pleasure speaking with you.

Higgins: Thank you, you too.

Collins: Thank you Sean and thank you Jordan for visiting us virtually! Glad we could have you on.

For more DLS, check out our other blogs and visit us on Facebook, LinkedIn, Instagram, or Twitter!