Yesterday morning, I had an interesting exchange of thoughts on Twitter with the makers of Roost, the plugin that powers push notifications on this blog.
Since then, a number of thoughts have crossed my mind about where we are headed with everyday computing, how the next generations will view the desktop, and how we, as consumers of information, are moving towards becoming collaborative producers of the knowledge contained on the internet.
One of the dominant trends of the past decade has been the emergence of the smartphone. Although we saw a series of devices that marked the start of the new era in technology, like the Palm Treo and the BlackBerry, the biggest trigger of change came in June 2007 with Apple's launch of the iPhone.
The earlier smartphones were basically devices that people carried in order to stay in touch with their work lives. They provided mobile access to email, and when you were really stuck without a desktop close by, you could get onto a website and use a virtual mouse on the tiny mobile screen to click through the menus and complete what you needed to do. The mobile phone had also begun to offer tethering for the laptop, so if you had the choice, you would probably hook up your phone to the laptop and have a go at whatever you were doing, over snail-paced 2G or 2.5G mobile connectivity.
What the iPhone did was provide an environment where the user could download apps from the App Store and use them according to personal preference. The device, as touted by Apple, was designed with end users in mind who might be older, not tech-savvy, and unwilling to spend time learning how to use a smartphone. In fact, it was the first time a high-technology gadget was introduced without a user's manual. (I wonder when the Europeans will learn from that and do something about the gizmos they put in their cars.)
Moving ahead, we saw the mobile phone evolve across different platforms. With the introduction of Android, the touch screen became ubiquitous, and slowly but steadily the multi-touch Gorilla Glass interface became common among most manufacturers, whether they were serving Google's Android to their customers or shipping other operating systems, such as Nokia, which adopted Windows Phone, or BlackBerry with its own platform.
What happens with applications is that the user gets absolute control over his or her content within the environment of whoever hosts the application. So while on a desktop you would first need to transfer your photographs to the computer, then log in to Facebook and share them with your friends (after figuring out how to do so), on a mobile phone, which already has a camera, sharing via the Facebook app is easy, not to mention instantaneous. Add to that, you can share a photograph through other applications like Twitter, or send it directly to your friends using a messaging application like WhatsApp.
Developers, in turn, get an extra edge in writing applications for the mobile. Take Instagram, for example. The application rides a layer over your phone's standard camera application and lets you tinker with your photographs, running complex retouching and filtering processes that on a desktop would be the forte of an experienced Photoshop user. Once the retouched photographs are shared across your social networks, it becomes a fad, a culture. Coupled with the location information you share, your choices become a source of valuable information for tourism, retail and the targeting of the advertisements you see.
But what do you do when you have 50 frequently used applications on your phone, including some 20 news applications, perhaps 10 different messengers, and games for your free time ranging from FarmVille to Candy Crush to Scrabble? It would be very difficult to keep pace with all of them were it not for push notifications. Yes, this nifty little feature allows your applications to communicate with you without your even opening them. For example, I rarely open my news applications on the phone, yet I am regularly updated on current affairs, as they happen, through the push notifications I receive throughout the day.
Soon after the mobile got "appified", it was the desktop's turn. But before that happened, there was a wide space between the smartphone and the laptop or desktop. This space was at first occupied by devices called "netbooks": essentially low-resource laptops capable of running a mobile browser on an energy-efficient processor, with a small low-resolution screen and a miniature keyboard. Although the keyboards of these devices brought the tactile feel of interfacing with a computer, which the later tablets lack, the devices failed to deliver the enriching experience that the smartphone gives its users.
Now what? The tablet found its space between the laptop and the smartphone, for users looking to engage with the web on a larger display than the smartphone, with nearly equal (and later even better) resolution, and processing and graphics fast enough to run not only web applications but also highly engaging games. Soon the tablet started to become the preferred gaming console, as opposed to the consoles that earlier needed to be connected to a television, like the Nintendo Wii, Microsoft Xbox or Sony PlayStation.
In fact, the tablet has found a place in homes as a device used by toddlers as well as older members of the family who would normally find it a bother to use a PC or laptop. Most tablets in the market today follow benchmarks established by Apple's iPad. Some tout features that could potentially lead the iPad in terms of specifications, but as far as apps are concerned, no market has matured as much as the iOS App Store has, both for end users and for developers.
The next step, with the success of applications that run on the smartphone and the tablet, is that the desktop (or the laptop) has seen a movement away from conventional computing. We are now looking at an all-new generation of computer users who start using tablets and smart devices at ages that were unimaginable a decade ago. Last week, a close friend of mine was buying an iPad Air as a gift for his father, and he mentioned that his nine-month-old son is already savvy with his smartphone and likes to play games on it that help him recognise shapes. We are talking about a child who hasn't even begun to speak yet. Clearly, this generation of computer users is going to have a far different learning curve and will use technology much more than we did, or our parents did in their time.
But then, there are certain human factors and requirements that make the desktop (and laptop), with their present user-interfacing systems, more favourable to use than touch-screen interfaces, or hybrid interfaces such as the Surface Pro's, where you do have a keyboard but the mouse is substituted by the touch screen. I will put my opinion down here, drawing on my experience with devices over the last 30 years.
Let's face it. The first computer I used, an IBM compatible, did not have a mouse. In my younger days I also used the Commodore 64, which had a GUI and a joystick, and later I had the chance to use an Apple. Through the years, moving from desktops to laptops with trackballs, the TrackPoint (the little red dot between the G and H keys) and the modern multi-touch trackpad, there have been many variations in how we point at the screen. But there have been few variations in how we type. In fact, many attempts at creating virtual keyboards have failed because they lack the tactile feel users look for while typing text into the computer.
Even today, while my computer has a multi-touch trackpad, I keep a Bluetooth mouse at my workstations, both in the office and at home, so I can point precisely and stay efficient while working. I cannot imagine the drain on efficiency if I had to use a device that did not allow a pointing device such as a mouse. At the same time, I like to reduce my use of a pointing device as much as possible, which was quite feasible up through the older versions of Microsoft Office on Windows; of late, though, I find applications evolving to become more mobile-centric and geared towards touch-screen interaction.
This means that the end user typically loses efficiency with each passing generation of software, especially users like me, who are more used to a keyboard and mouse while writing a blog article such as this one, or while programming.
At the same time, though, there's a development taking place. The desktop browser and operating environment are becoming increasingly integrated with the smartphone, and much more social. For example, my Mac desktop running Mavericks integrates iMessage and FaceTime seamlessly with my mobile devices, letting me communicate without the interruption of picking up my phone to make a call or send a message. I have a host of applications on my mobile phone, and many of them are also available on the desktop, including some messengers that now offer seamless desktop integration, as well as news websites that send push notifications to me at my desk.
Yes, push notifications once again. What push notifications do for me on the desktop is reduce the need to pick up my mobile phone every time there is a news update I would like to read. Instead, I get the notification in the corner of my screen, and it does not intrude into what I am doing through the day. If there is a time when I do not wish to receive notifications, such as when I am running a presentation, I can switch my laptop to Do Not Disturb mode.
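For the curious, this is roughly what a web page or blog plugin does to show a desktop notification, sketched with the standard Web Notifications API that modern browsers expose. The `shouldNotify` gate is purely my own illustration of a Do Not Disturb check, not part of any real API, and Roost's actual plugin will of course differ:

```javascript
// Decide whether a notification should be shown at all.
// Purely illustrative stand-in for an OS-level Do Not Disturb switch.
function shouldNotify(doNotDisturb) {
  return !doNotDisturb;
}

// Ask for permission once, then show a desktop notification.
// Browser-only: `Notification` is the standard Web Notifications API.
async function notify(title, body, { doNotDisturb = false } = {}) {
  if (!shouldNotify(doNotDisturb)) return null;

  if (Notification.permission !== "granted") {
    const result = await Notification.requestPermission();
    if (result !== "granted") return null; // user declined
  }
  return new Notification(title, { body });
}

// Usage (in a browser): notify("Breaking news", "Headline text here");
```

The nice part, as described above, is that once permission is granted the page never needs to be in the foreground for the little corner alert to appear.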
This all sounds good as long as the laptop retains the flexibility to give me the command-line interface and the flexible UIs that mobile and semi-mobile (tablet) platforms do not provide, along with better ergonomics (yes, that's what we've been discussing all along) and faster access to expressing myself, writing code, or performing the traditional operations I have always used a computer for. I know future generations may not particularly agree with what I am saying, but there is still space for geeks and nerds who are not happy being end users all the time. The question now is: is there space emerging for an alternative operating system or environment that brings together the quirkiness of an "appified" environment like iOS while also providing the foundation that a conventional UNIX environment gives the programmers who love it? That remains to be seen.