
What Minimum OS Should We Support? | The Ethics of Version Obsolescence


Every year, Apple and Google hold their major conferences (WWDC and Google I/O, respectively) where new functionality is announced for their platforms. What follows is a wave of hype from developers on these platforms, experimentation, and the development of best practices for integrating the new functionality into the development process. I have been in numerous conversations where this excitement was the guiding light for answering the age-old question, “What minimum OS version should we support?” Developers are eager to try the “new” thing, and sometimes major changes require dropping support for OS versions that lack that functionality. I’m here to argue that novelty and innovation shouldn’t be the driving factors when deciding which OS versions to support, especially when it comes at the cost of ethical app development.

Novelty and Innovation

It is important to know when to begin implementing new technologies in your mainline applications. This should come after internal vetting and the development of best practices, but eventually you should start pulling from the wealth of new functionality that is regularly introduced on your platform. Most of the time, this should not affect which OS versions you decide to support. You can check for the availability of a feature either at compile time or at runtime, and branch the logic depending on whether the user’s device supports it. This allows a user on an older OS version to still use the rest of the application.
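On Android, for instance, this kind of runtime branch is conventionally a comparison against the device’s reported SDK level. The sketch below mimics that pattern with a plain function so it stays self-contained; the version threshold and feature names are illustrative assumptions, not real platform APIs.

```java
// Sketch of a runtime availability check, modeled on Android's
// Build.VERSION.SDK_INT pattern. The version threshold and the
// feature names are illustrative assumptions, not real APIs.
public class FeatureGate {
    // Hypothetical minimum OS version that ships the new share API.
    static final int NEW_SHARE_API_VERSION = 29;

    // Returns which code path the app takes for a device
    // reporting the given OS version.
    static String sharePath(int deviceOsVersion) {
        if (deviceOsVersion >= NEW_SHARE_API_VERSION) {
            // New functionality is present: use it directly.
            return "new-share-sheet";
        }
        // Older OS: fall back to the legacy flow so the rest of
        // the app remains usable on this device.
        return "legacy-share-dialog";
    }

    public static void main(String[] args) {
        System.out.println(sharePath(30)); // new-share-sheet
        System.out.println(sharePath(26)); // legacy-share-dialog
    }
}
```

The same idea applies at compile time on iOS, where availability checks let the compiler verify that newer APIs are only reached on OS versions that have them.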

There may come a time when the new functionality you want to use is considered integral to the application, and it doesn’t make sense to release the application on any OS that doesn’t provide it. This is a rare scenario, and often you can architect your application in a way that segments this functionality off. If this scenario does arise, though, it should be weighed as a real cost of building the application.
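One way to segment version-dependent functionality, sketched below under assumed names (the `PhotoPicker` interface and version constant are hypothetical), is to hide it behind an interface so the version branch lives in exactly one place and the rest of the app never mentions OS versions.

```java
// Sketch of segmenting version-dependent functionality behind an
// interface. All names here are illustrative, not real platform APIs.
interface PhotoPicker {
    String pick();
}

class ModernPhotoPicker implements PhotoPicker {
    public String pick() { return "modern-picker"; }
}

class LegacyPhotoPicker implements PhotoPicker {
    public String pick() { return "legacy-picker"; }
}

public class PickerFactory {
    // Hypothetical version that introduced the modern picker.
    static final int MODERN_PICKER_VERSION = 33;

    // The rest of the app depends only on PhotoPicker, so the
    // version check is confined to this factory.
    static PhotoPicker forOsVersion(int osVersion) {
        return osVersion >= MODERN_PICKER_VERSION
                ? new ModernPhotoPicker()
                : new LegacyPhotoPicker();
    }

    public static void main(String[] args) {
        System.out.println(forOsVersion(34).pick());
        System.out.println(forOsVersion(28).pick());
    }
}
```

With this shape, dropping the legacy implementation later is a one-file change rather than a hunt through the codebase.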

Development and Testing Effort

Supporting multiple OS versions obviously comes with added development and testing effort. There are more scenarios to handle, and that means there are more ways that something can go wrong. If you are architecting the app to take advantage of new technologies on some OS versions, but have a slightly different experience on OS versions that don’t support that functionality, the complexity grows even larger. Furthermore, supporting more OS versions often means supporting more devices, and particularly on Android this can quickly balloon the number of test devices you need in order to comfortably consider the app well tested.

There are ways to mitigate some of these issues (for instance, if you use automated UI tests to verify user flows, you can often run them on device farms to cover a wide array of device types), but ultimately there is a cost here that needs to be weighed.

Ignoring Your Potential User Base

Whatever versions you choose to support, unless you choose all of them, there will be people you drop from your potential user base. This should be a conscious decision, and you should be able to live with knowing who you will not be providing your application to.

The first step in analyzing how your choice of supported OS versions will affect your user base is to look at adoption rates on the platforms you are developing for. As of June 2020, 98.6% of iPhone users were on one of the latest two major versions of iOS, while only 39.5% of Android users were on one of the latest two major versions of Android. These numbers vary depending on many factors, but they make it obvious that a loose rule about which versions to support won’t suffice. It’s also worth noting that focusing on iOS because of its high OS upgrade adoption rate ignores the fact that Android currently holds a 74.13% market share.

I know we like to think of all of our work as critical, but the fact of the matter is that some applications matter more to a person’s lived experience than others. If you are developing an app for health care, or one that manages someone’s life savings, it is worth considering the level of effort required to support the greatest number of users possible. Acknowledging the harm that could be done to certain communities by making the application unusable for them is an important part of this decision. That isn’t to say that “non-critical” applications shouldn’t consider this angle.

The goal of developing any application should be to better the user’s life. Class and age are inextricable from a decision that effectively says only those who can afford new devices, and can spend the time it takes to learn them, deserve quality-of-life improvements. The users who tend to hold onto their devices longest are primarily ages 65 and up, and more than a quarter of low-income households own only a smartphone and have no broadband access at home, compared to 5% of high-income households.

Additionally, once these populations can no longer access your application, any analytics you have about how it is used will inherently discount what those excluded users might have done. It’s easy to fall into a feedback loop where the data drives decisions about future development that further alienate these populations.

A Case for the Environment

Planned obsolescence, or purposefully shortening the lifespan of a product to drive sales of newer versions, has obvious ethical implications with regard to deceiving your user base. It also raises environmental concerns that parallel the more ambivalent attitude toward unplanned obsolescence we are discussing here. In both cases, the producer of the product is pushing a mindset of replacing and disposing of something that might work well for many more years. Apple has been accused of slowing down older phones in order to drive sales of newer ones, but it generally doesn’t have to do this to get customers to throw out their old phones every couple of years (the average iPhone remains active for 18 months). Developers help this along every time we decide to drop support for older models. Building a new smartphone contributes between 85% and 95% of the phone’s total carbon dioxide emissions over two years of its life, and those manufacturing emissions equal roughly what it takes to power an existing smartphone for 10 years. Every time we drop support for older models, or claim to support them while letting their experience degrade, we make using existing hardware more untenable.

So, Which Is It?

There are many factors to consider when deciding which OS versions to support, and each project will have different justifications for supporting or not supporting specific versions. There are costs to supporting older versions, and at some point it would be cheaper to buy a new phone for every affected user than to pay the development costs of maintaining that support. Often, though, those costs are warranted. Eventually you will want to have adopted what is now new, but that transition should be a deliberate decision, not one made solely because it will make our jobs more interesting. At the end of the day, the thing we’re producing should benefit as many people as possible, and should help us do that again and again in the future.

