#22: Everywhere Apps

The following are the show notes from the talk in the video above. The notes don’t exactly flow in the same order or indeed flow very well – they are meant to act as a reference to the video rather than as an independent post.

POSIX

POSIX was the first attempt at writing apps once and running them on all flavours of UNIX. POSIX stands for Portable Operating System Interface, a family of standards specified by the IEEE for maintaining compatibility among operating systems. Any software that conforms to the POSIX standards should therefore be portable across other POSIX-compliant operating systems. The standards were defined using the C programming language as the reference. The effort started in the 1980s and continued actively well into the late 1990s. A better guide to POSIX here.

However, things were in practice not quite as interoperable as all that, especially where hardware dependencies such as graphics cards or numeric co-processors were involved. Other divergences were in areas such as UX standards and ways of handling advanced functionality such as browsers. In effect, only a few kinds of applications were interoperable even between fully POSIX-compliant systems such as AIX and macOS – many were not.

Ajax

Browsers used to offer common capabilities for publishing static web pages, but dynamic web pages still required browser-specific extensions. In 1998, Microsoft introduced the XMLHttpRequest (XHR) scripting object, which was soon adopted by all other browsers. This single call allowed browser-side JavaScript to fetch the contents of any URL programmatically, instead of requiring a user to click on a link. The open source community latched on, and soon Ajax (Asynchronous JavaScript and XML) was well on its way to ruling the world. The “asynchronous” part was the key idea: different web calls could be made at different times as needed, instead of only once at page load. Eventually jQuery came along – a simple library of utility functions that made XHR and other JavaScript tasks much simpler – and everyone started building on top of it.
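To make the idea concrete, here’s a minimal sketch of the classic Ajax callback pattern. The function name and URL are made up for illustration, and the transport is passed in as a plain function so the same logic works whether it wraps a browser XMLHttpRequest or a test stub:

```javascript
// Classic Ajax flow: request a URL asynchronously, parse the response,
// and hand the result to a callback instead of reloading the whole page.
// `transport` stands in for the browser's XMLHttpRequest machinery.
function getJson(transport, url, callback) {
  transport(url, (status, body) => {
    if (status === 200) {
      callback(null, JSON.parse(body)); // success: deliver parsed data
    } else {
      callback(new Error("HTTP " + status)); // failure: report the status
    }
  });
}
```

In a browser, `transport` would wrap `new XMLHttpRequest()` and its `onreadystatechange` dance – exactly the boilerplate that jQuery later hid behind helpers like `$.getJSON`.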

Hybrid App

The result of a hackathon in 2009 was a very smart application by a small Canadian company, Nitobi Software, that they called PhoneGap. Nitobi’s developers took a standard JavaScript/HTML5 web page and encapsulated it in an embedded browser that could be run like a native app on an iPhone (and later on Android). For simple apps, you could get instant “native” versions deployable to the app store – the quotes are because it wasn’t really native code but JavaScript in a browser container that looked native.

This spawned a generation of hybrid apps. Big companies got into the act – IBM’s Worklight and Intel’s appMobi used the same core technology, as did Kony, founded by Indian entrepreneur Raj Koneru. Nitobi itself was acquired by Adobe in 2011.

Today, hybrid apps are no longer in favour. Worklight is dead, appMobi has pivoted, Kony was sold off in pieces and Adobe discontinued PhoneGap; today Apache Cordova survives as an open source project. The main reasons: hybrid apps are noticeably slower than native, and offer only limited, complicated ways to access native hardware features. Plus they don’t automatically handle a range of screen sizes and orientations gracefully, so there’s a lot of incremental effort.

Progressive Web App

PWA was a once promising but now largely abandoned model of running web apps on desktop and mobile in a way that looked like native apps. Proposed in 2015, it allowed people to build once (HTML + JavaScript) and run anywhere in a way that looked native, including an icon on the desktop, its own window rather than a browser tab, and offline capabilities.

PWAs were at heart web apps with a few frills:

  • The base web application
  • A manifest describing client settings, such as icon, window behaviour etc.
  • A set of service worker jobs for offline capabilities.
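The service workers are where most of the offline magic lives. Below is a minimal sketch of the common “cache-first” strategy – a sketch only, with the cache and the network fetch passed in as plain arguments so the logic is visible on its own; in a real service worker this would run inside a `fetch` event handler against the browser’s Cache API:

```javascript
// Cache-first strategy: serve from the cache when possible; otherwise
// go to the network and remember the response for offline use later.
async function cacheFirst(cache, fetchFn, request) {
  const cached = await cache.match(request);
  if (cached !== undefined) return cached; // offline-friendly fast path
  const fresh = await fetchFn(request);    // fall back to the network
  await cache.put(request, fresh);         // remember it for next time
  return fresh;
}
```

This is why a PWA could start instantly and work offline: after the first visit, every subsequent load is answered locally.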

This offered some advantages over standard web applications, which required an always-on connection and often had long initial load times because of the weight of the start page. A PWA was already downloaded and waiting on the desktop, so startup was near instant. It was better than native desktop and especially mobile applications because it was quick to download and continuously updated (no executable to distribute periodically). These were huge advantages in the low-bandwidth world of 2015, when mobile data connectivity was very slow and always-on was a distant dream. Native mobile applications used to be 20–30 MB in size and take an hour or more to download; even more so on the desktop, with apps being much larger (200–500 MB) and home broadband often 10 Mbps or less.

In this timeframe, many companies rolled out PWAs to great fanfare – Lyft Lite, Twitter Lite, Makemytrip, Starbucks, Treebo etc. – and this was widely touted as the future of all apps: no more native app building, the PWA would be used everywhere. Native apps still did a lot more (access to hardware-specific features such as Bluetooth, GPS or 3D accelerators), but the thinking was that the bulk of the focus would move to PWAs, especially on mobiles where the benefits were even more enormous.

Five years later, PWA is all but dead. Those star case studies have either discontinued their PWAs or gone back to a native-first strategy with the PWA only as an option. On PWAStats, a site that tracked PWA success stories, most of the stellar entries are years old and no longer in action. Starbucks is a good example; it was a PWA flag-bearer but is no longer – its main site is now a standard web app (www.starbucks.com, which is not a PWA) plus native iOS and Android apps, even though the PWA at app.starbucks.com is still alive. Others such as Makemytrip launched PWAs to much fanfare, but theirs no longer works – Makemytrip has native apps instead. The same story with Treebo and Twitter; indeed Lyft seems to be the only one still actively focusing on its PWA (Lyft Lite), though on iPhones Lyft prefers its 420MB native app. A PWA is quite hard to find nowadays, and browsers have scaled back too – Firefox, for instance, removed its experimental desktop PWA support in 2021.

Two key changes caused the downfall. First, the bandwidth problem largely disappeared in most countries – 4G made downloads and updates quite practical, and even a 20–30 MB app became a few minutes of download on mobile data or at home. Second, it turned out to be more difficult to build and maintain a PWA than a pure web app – especially those service workers that worked behind the scenes and provided most of the PWA magic. This chipped away at the main advantages of PWAs, while the disadvantages remained. In particular, native hardware capabilities increased enormously – fine-grained location and camera control, motion sensors, 3D animations, Bluetooth, NFC, etc. – all of which were out of reach of a PWA. Starbucks’ native mobile app is a full payment wallet using NFC, while the PWA version is not. Finally, native apps became a lot easier to build, with much better tools and a large pool of skilled developers.

Responsive Design

At the same time as PWA, another emerging trend was responsive web design. The idea was to create a web page once but allow it to respond to changing screen sizes, making it multi-device compatible. The CSS3 stylesheet standard was maturing at the same time, and by 2018–19 this had become a cornerstone of web design. Almost all web apps today are responsive and thus run everywhere (kind of), but they don’t replace native apps because of the limitations of the browser model. However, web apps have become very useful in the last decade, and they provide a pretty close experience to the Everywhere App.
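CSS media queries express the breakpoint logic declaratively; the sketch below makes the same decision explicit as a plain function (the pixel widths are illustrative – every project picks its own breakpoints):

```javascript
// Pick a layout based on viewport width, the way CSS media queries do.
// The breakpoint values here are illustrative, not a standard.
function layoutFor(widthPx) {
  if (widthPx < 600) return "mobile";  // single column, large touch targets
  if (widthPx < 1024) return "tablet"; // two columns
  return "desktop";                    // full multi-column layout
}
```

In real responsive design the equivalent lives in the stylesheet (`@media (max-width: 599px) { … }`), and browser JavaScript can run the same test via `window.matchMedia`.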

Cross Platform Compilers

The idea behind cross-platform compilation (or just cross-compilation) is that you can take one code base and compile it into another language. As a quick refresher, compilers turn code written in a high-level language such as C or Pascal into machine code that can run on the target hardware. Cross-compilers play a different role: they take code written in (say) C and output (say) Java instead of machine language – such a compiler would be a C-to-Java cross-compiler, and would allow any C programmer to ship Java code without actually learning Java.

Cross-compilation is particularly popular between languages that share the same base – such as C# and Visual Basic (both built on the ECMA Common Language Infrastructure) – but there is no restriction: in theory any complete computer language can be cross-compiled into any other, since (unlike human languages) they are fully deterministic, with no ambiguity.

In the context of today’s discussion, cross-compilers can be used to take code written for the desktop and compile it into code meant for mobile apps. This is important because mobile platforms have preferred languages – Android apps are expected to be written in Java or Kotlin, while iOS expects Objective-C or Swift. If you need native code to run on both platforms, you need a way to turn Kotlin code into Swift, or something similar.

The most popular project in this context was Mono – later Xamarin – which lets a single C# code base target both Android and iOS: on iOS the C# is compiled ahead-of-time into native ARM code, while on Android it ships with a bundled Mono runtime. Till recently, Xamarin was the leading choice for developers opting for this approach.

The advantage of this approach is that it produces native code that runs nearly as fast as hand-crafted native code (none of the overheads of PhoneGap). However, it’s not the answer to everything – each platform requires some code-level tweaking, not everything is supported, and the generated code may have translation defects arising from bugs in Xamarin itself.

Common Language Runtimes

Yet another approach to Everywhere Apps is a runtime library for each platform that provides a common set of capabilities – allowing code to be written once against this common standard and run anywhere. By far the most popular of these is Java, with the Java Virtual Machine (JVM) as its common runtime. iOS, however, remained hostile to Java – though Java had a head start by being native to Android (or maybe because of it), Apple refused to adopt it. Other options filled the breach – most popularly React Native from Meta (not to be confused with the JavaScript library ReactJS, also from Meta).

React Native is one of the most popular approaches today for building Everywhere Apps. It creates apps that are fast, responsive and have access to native features. There are some limitations – a steep learning curve, complicated design choices, platform-specific tweaks that cause maintenance headaches, and performance that still lags native apps. It’s great for many kinds of applications but, as with so much else, not for all.

Flutter is often mentioned in the same breath as React Native, especially because it has better performance than the latter. Flutter is technically a different approach – it compiles to native ARM code rather than running on a common runtime, and this removes most of the performance overheads. The downside is that the design approach is very prescriptive – anything but Material Design is a huge amount of work – and the resulting app can be very large. Plus, you have to learn Dart, a language with arguably a steeper learning curve than either Swift or Kotlin, and without the rich library ecosystem of JavaScript.

EFK

EFK is another combination of core components for search, log ingestion and display of logs. Elasticsearch provides the search, Fluentd the ingestion and Kibana the display.

Other combinations are also possible. Logstash is the original member of this trio and formed the ELK stack (now called the Elastic Stack, all owned by Elastic.co). Another alternative is the PLG stack (Promtail, Loki, Grafana), all from Grafana Labs, which has a few fans (here’s a good blog post about it).

#21: Important ML Algorithms

I’ve copied the descriptive text below from the links that follow each topic. Most are from an excellent IBM site.

Linear and Logistic Regression

Both linear and logistic regression are among the most popular models within data science, and open-source tools, like Python and R, make the computation for them quick and easy.

Linear regression analysis is used to predict the value of a variable based on the value of another variable. The variable you want to predict is called the dependent variable. The variable you are using to predict the other variable’s value is called the independent variable.

https://www.ibm.com/in-en/topics/linear-regression
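The one-variable case has a closed-form least-squares solution small enough to sketch directly (plain JavaScript, no library assumed):

```javascript
// Ordinary least squares for y = slope * x + intercept.
function linearFit(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY); // covariance of x and y
    den += (xs[i] - meanX) ** 2;              // variance of x
  }
  const slope = num / den;
  return { slope, intercept: meanY - slope * meanX };
}
```

The slope is the covariance of the two variables divided by the variance of the independent one – the same formula Python and R evaluate under the hood for the simple case.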

Logistic regression estimates the probability of an event occurring, such as voted or didn’t vote, based on a given dataset of independent variables. Since the outcome is a probability, the dependent variable is bounded between 0 and 1. 

https://www.ibm.com/in-en/topics/logistic-regression
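The bounding between 0 and 1 comes from the logistic (sigmoid) function applied to a linear combination of the inputs. A minimal sketch, with the weights assumed to have been learned already:

```javascript
// The logistic function squashes any real number into (0, 1),
// so the output can be read as a probability.
function sigmoid(z) {
  return 1 / (1 + Math.exp(-z));
}

// Predicted probability for one observation, given learned weights.
function predictProbability(weights, bias, features) {
  const z = features.reduce((sum, x, i) => sum + weights[i] * x, bias);
  return sigmoid(z);
}
```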

Discriminant Analysis

Discriminant analysis builds a predictive model for group membership. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. The functions are generated from a sample of cases for which group membership is known; the functions can then be applied to new cases that have measurements for the predictor variables but have unknown group membership.

https://www.ibm.com/docs/en/spss-statistics/25.0.0?topic=features-discriminant-analysis

The Digitalvidya blog has less mathematics, more explanation on the same topic.

An Indian genius played a key role in this field. PC Mahalanobis set up the Indian Statistical Institute and was a key figure in India’s Planning Commission. He also invented the Mahalanobis distance, a key theoretical advance in discriminant analysis.
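For intuition, the Mahalanobis distance is a Euclidean distance rescaled by how much each variable naturally varies. The sketch below handles the simplified case of a diagonal covariance matrix (independent variables); the full definition uses the inverse covariance matrix:

```javascript
// Mahalanobis distance with a diagonal covariance matrix:
// each squared deviation is scaled by that variable's variance.
function mahalanobisDiagonal(x, mean, variance) {
  let sum = 0;
  for (let i = 0; i < x.length; i++) {
    sum += ((x[i] - mean[i]) ** 2) / variance[i];
  }
  return Math.sqrt(sum);
}
```

With unit variances this reduces to the ordinary Euclidean distance; a large variance in one variable shrinks that variable’s contribution, which is what makes the measure scale-aware and so useful for deciding group membership.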

Neural Networks

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.

Artificial neural networks (ANNs) are composed of node layers: an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.

https://www.ibm.com/in-en/cloud/learn/neural-networks
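The node behaviour described above – weighted inputs compared against a threshold – can be sketched as a single artificial neuron (the weights and threshold below are arbitrary examples):

```javascript
// One artificial neuron: weighted sum of inputs compared to a threshold.
// It "fires" (returns 1) only when the weighted sum exceeds the threshold.
function neuron(weights, threshold, inputs) {
  const sum = inputs.reduce((acc, x, i) => acc + weights[i] * x, 0);
  return sum > threshold ? 1 : 0;
}
```

Real networks replace the hard threshold with smooth activations such as the sigmoid or ReLU, so the whole stack of layers can be trained by gradient descent.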

Anomaly Detection

In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behaviour. Such examples may arouse suspicions of being generated by a different mechanism, or appear inconsistent with the remainder of that set of data.

https://en.wikipedia.org/wiki/Anomaly_detection
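A common baseline technique is the z-score: flag any point lying more than some number of standard deviations from the mean. A minimal sketch (the threshold is a convention you tune, not a rule):

```javascript
// Flag values further than `threshold` standard deviations from the mean.
function zScoreAnomalies(values, threshold) {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance);
  return values.filter((v) => Math.abs(v - mean) / std > threshold);
}
```

This only catches simple outliers; the more sophisticated methods on the linked page handle seasonal patterns, multivariate data and shifting baselines.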

#12: Low Code, No Code

Coders have been around since the dawn of computers but, let’s face it, no one really wants to write code to get things done. Low-code or no-code solutions have been around for a long while, but they used to have two problems: either they were very limited in what they could achieve and worked only within a closed ecosystem, or the “low” was still a lot of code, often in a new script that was hard to learn. Lotus Notes, MS Access and our very venerable MS Excel were all touted at one time or another as “low code”, but while they enjoyed considerable success in pockets, they never came to change the world.

We’re trying again, with a new generation of tools. Technology has progressed, so these are far more general-purpose and have fewer restrictions. Will this wave finally drown coding for ever?

We’ve already looked at no-code website builders earlier; these tools are more ambitious. They aim to build full-fledged applications that can run business processes in the real world. Many can generate applications that meet the stringent conditions of Google’s and Apple’s app stores.

#11: Usability

We’ve talked in the past about user experience; today we’re focusing on one narrow part of a great user experience – usability – which in my opinion is the most valuable piece of the pie. Getting usability right gives more bang for the buck than any other aspect of user experience, so that’s what we focus on today.

There are many sources of usability studies, but an influential one originates from the erstwhile Sun Microsystems. The company was the dominant player in what was then called the “workstation” market, and it was important for these to be highly usable machines. Sun hired a distinguished Danish researcher – Jakob Nielsen – who used his work at Sun to originate the field of web usability, specifically focused on the usability of web applications. In particular, he pushed the cause of “discount usability” – fast, cheap improvements based on a few basic techniques to identify and fix usability problems.

This is one post where the video is important – it’s not enough to read the text. If you’re completely video-challenged, however, do click on the images and links in the post to understand the topic overall.

Defining Usability

Usability is a measure of how able or fit something is for its intended use. Nielsen defines five measures of usability: learnability, efficiency, memorability, errors (how many, and how easily users recover), and satisfaction.

Usability is important because it is a major driver of adoption, productivity, and frequency of usage. External apps or internal – usability matters.

10 Usability Heuristics

Nielsen gives us 10 guidelines to ensure good usability when designing an application interface.

One mistake I made in the video – the match between real and virtual is called skeuomorphism, not anthropocentrality.

Hick’s law

Application interfaces are often about navigating choices, and we’ve always assumed choice is good. Psychologists William Hick and Ray Hyman spent many years studying how people process information presented to them, including how people choose, and this resulted in Hick’s Law, which says that in most cases more choice is not better.

People have a limited ability to choose between multiple options, and as the options increase, the time taken to decide and the difficulty of deciding increase even faster. Other studies have shown how people find the idea of choice attractive, but in practice struggle to choose when there are lots of options.
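Hick’s law is usually written T = b · log₂(n + 1): average decision time grows only logarithmically with the number of equally likely choices. A quick sketch (the constant b is empirical and varies by task and person):

```javascript
// Hick's law: average decision time for n equally likely choices,
// where b is an empirically fitted constant for the task.
function decisionTime(n, b) {
  return b * Math.log2(n + 1);
}
```

Note the flip side of the logarithm: going from 7 options to 15 adds only one more “unit” of decision time, which is why interfaces group choices into menus and categories rather than presenting them flat.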

Hick’s law is not about the choices per se, but the process of choosing. It’s important therefore to be conscious about how people choose. Prof. Sheena Iyengar has a nice TED talk on the subject. Choice architecture is about how to present choices to people so that quicker decision-making and fewer errors result.

Usability vs User Experience

As I’ve already mentioned, usability is a part of overall user experience, but to my mind the most valuable part. Usability is essential; the other parts of user experience are nice-to-have improvements. Without usability, it is unlikely that any other goal of user experience will be met.

Beautiful products are no guarantee of usability. There’s Apple’s famous stumble with the “hockey puck” mouse – often called the most reviled mouse ever, though it looked quite stylish. On the other hand, no one would accuse the original Gmail of being particularly pretty, but boy was it usable. JavaScript, conversation threads and an incredible 1 GB of storage (500 times more than the alternatives) made it the most usable email by miles.

Usability beyond computers

Great examples of wonderfully usable objects are not restricted to technology. From teapots to paper clips, usable design (and its opposite) is everywhere. One book I like is “The Design of Everyday Things” by Don Norman, which talks about good and bad design everywhere from computers to cars. Another is “The Evolution of Useful Things” by Henry Petroski, which talks about forks and zippers and screwdrivers, and how irritation with the usability of existing objects often leads to revolutionary products.

Bottom Line

Get everything right if you can. If you’re getting only one thing right though, make sure that one thing is usability.

Guest Speakers

Nidhi talked about the awesome experience and usability of Amazon’s Rakhi eGift. No images are attached because Rakhi is over, so you’ll just have to see it in the video above.

Priyanka introduced us to the awesome trends site Exploding Topics. It’s an AI-enabled site that collects trends – in everything, not just technology, and over long periods of time, up to fifteen years! It’s a great way to track what’s happening in a huge number of areas in the world around you. Did you know, for instance, that there has been a surge of 16.5x interest in “squat proof leggings” over five years?

You could also use it to research more serious topics – fintech, banking trends, political trends and suchlike. Enjoy!

That’s all, folks. Please leave feedback.