Exploring digital and physical spaces
Once upon a time, we all used analogue tools. It didn’t matter what industry you were in; you got your work done using paper and pencil and the specialised tools of your trade. That started to change in 1944 with the invention of the first programmable digital computer. Now, instead of doing calculations involving large numbers by hand, you could do them on a computer, provided that you were in the US Navy and had a room the size of a movie theatre to work in.
Over the ensuing years, though, computers started to take on more and more tasks that were formerly done on paper. They became smaller, more affordable, and more flexible. They progressed from being used solely in academic and military settings to commercial applications such as production scheduling and airline reservations.
They also began to take over different types of paper-based work. While the original computers were invented to solve hard math problems, by the 1960s they were starting to be used to display information on screens. This meant that they could eventually take over tasks like typing manuscripts.
In 1973, the Xerox Alto introduced the first graphical user interface, and this paved the way for computers to take on a lot of new work, such as internal business communication via email and chat, drafting, typesetting and design, and technical diagramming, as well as programming. All of these applications were part of the Xerox Alto, and all of these tasks were originally done on paper.
Instead of typing, drawing, and physically cutting and pasting shapes on a drafting board, you could do it all on the computer instead. People whose jobs relied primarily on paper-based activities became the new target market for information technology. But information technology was limited to information that could be effectively conveyed on two-dimensional screens, and it was also limited to people with stationary workflows. Personal computers started out as big, bulky things that required desks.
For the past 40 years, there’s been a barrier between the so-called knowledge workers, served by information technology, and everyone else. In terms of user research, some of the earliest usability tests were conducted by Bell Labs in the 1970s, and they looked nearly identical to moderated user tests today. They were testing computer-based office systems.
As computers have evolved, we have lugged these legacy design practices along with us, retrofitting them to work with new kinds of interfaces rather than reinventing them. We rarely acknowledge that these methods are founded on the assumptions of a flat-screen world.
But in the past few years, the form of computers has started to dramatically change. Chips, sensors and power sources have become compact and efficient enough to enable not just smartphones but a large number of new devices.
We now have smart speakers and cameras in our homes to sense and make sense of subtle changes in our environments. Virtual and augmented reality are starting to come into their own, enabling contextualised, volumetric visualization. The pandemic brought about a huge surge of interest in robotics in the healthcare and hospitality industries.
One of the first major mainstream IoT devices was the Roomba, all the way back in 2002. In 2006, Nintendo launched the Wii with a motion control system that helped sell more than 100 million units. The Microsoft Kinect offered more advanced motion tracking, and it was hackable. It sold a million units within two weeks of its launch four years later, in 2010. So we’ve been gradually trending away from screens for a while, but now, with new smart devices and augmented reality, the pace of this change is accelerating. The barrier between knowledge workers and everyone else is about to come crashing down, and this changes the way we design technology.
The new paradigm is one of dimensional information, one where we need to understand the physical spaces we occupy and the data that informs and describes those spaces. Our old paradigm was one of disembodied data; the new one has metadata intertwined with real-world spaces. Before, we had cyberspace, which was space as metaphor only. Now, we are creating true physical-digital spaces.
Understanding physical-digital space
There are a lot of buzzwords right now being used to talk about the integration of information technology with physical spaces. Thanks to Facebook, everyone is talking about the metaverse, but before that it was smart spaces or intelligent environments. If they are referring more to the processing, they might talk about edge or fog computing. If they are referring more to the interface, then they talk about immersive, spatial, or ubiquitous computing. In production industries, it’s usually just referred to as automation, or maybe industrial IoT.
You should also be aware that a lot of this discussion is happening at the level of individual devices and tools, such as computer vision and blockchain, rather than looking at the ecosystem in aggregate: real-world physical spaces integrated with digital metadata specific to those spaces and the objects within them.
Examples:
- Chevron’s digital twin oil fields: A digital twin instance, or DTI, is a virtual representation of a physical object, usually a machine or a building. It can include a 3D model, position, and historical data, as well as all relevant information about that object’s current status, which is collected by various systems and sensors. The state of the virtual object corresponds to that of the real one. Digital twins have their origins in outer space exploration, but these days they are starting to see more industrial applications. You will see energy companies like Chevron using digital twins of high-value equipment in oil fields to model and react to conditions that could impact how those machines are functioning.
- Disney World: Disney Genie is the new replacement for the FastPass system they used to have at their parks. As a park goer, you can sign up for this service to get real-time information about wait times for rides, as well as personalized recommendations about when to hit the attractions on your itinerary.
- Field technicians using Microsoft HoloLens: One of the use cases for the HoloLens turned out to be remote assistance for field workers. In these scenarios, a worker can be out on location wearing a HoloLens while it beams back everything they are seeing to an expert thousands of miles away. That expert can make annotations that will show up on objects in the field of view of the worker. So it’s a way of bringing that expertise into the workspace in a contextualized way.
- Your own front yard: It is not true that physical-digital spaces are only accessible to large companies. If you have a smart home device, especially more than one, chances are you’re living in a physical-digital space right now. Doorbell cameras offer an easy example of this. If you have a Ring or a Wyze camera, then you know it’s not just a passive recording device. It actually understands what is happening on your doorstep and can take action if it sees images of people, pets, or packages.
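To make the digital twin idea from the first example more concrete, here is a minimal sketch of a digital twin instance in Python. It is not Chevron’s actual system; the asset name, sensor fields, and operating limits are all hypothetical, and a real deployment would ingest readings from industrial sensor networks rather than hard-coded dictionaries. The point is just the core pattern: a virtual record whose state mirrors a physical object and that can flag conditions needing attention.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalTwinInstance:
    """A minimal digital twin instance (DTI): a virtual object whose
    state tracks a physical asset via incoming sensor readings."""
    asset_id: str
    state: dict = field(default_factory=dict)    # latest known values
    history: list = field(default_factory=list)  # timestamped readings

    def ingest(self, reading: dict) -> None:
        """Apply a sensor reading so the twin stays in sync with the real object."""
        stamped = {"time": datetime.now(timezone.utc).isoformat(), **reading}
        self.history.append(stamped)
        self.state.update(reading)

    def alerts(self, limits: dict) -> list:
        """Return the names of any readings outside their (low, high) limits."""
        return [name for name, (low, high) in limits.items()
                if name in self.state and not low <= self.state[name] <= high]

# Hypothetical oil-field pump: one reading arrives, temperature is over limit.
pump = DigitalTwinInstance("pump-17")
pump.ingest({"temp_c": 81.0, "pressure_kpa": 410})
print(pump.alerts({"temp_c": (0, 75), "pressure_kpa": (100, 500)}))  # → ['temp_c']
```

In a real system the `ingest` side would be fed continuously by telemetry, and the alerting side is where the “model and react” behavior lives: operators watch the twin instead of the machine itself.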