Accumulating Thoughts

On average we have about 6,000 thoughts every day. What if we could record, track and classify all of those thoughts, and feed them into a giant neural network to make a virtual version of yourself?

Writing, speaking, drawing, tweets, vlogs and all other forms of “content” we create are conscious to some extent. Our thoughts, however, are a constant stream, like a fire hose, mixing conscious and serendipitous thoughts.

Could this virtual self be more objective and less susceptible to emotion? Or would it have your biases built into it as well? Could you have this virtual self answer calls for you, or reply to messages?

Could we use this set of thoughts to spawn other forms of virtual interaction? Perhaps chatbots. Imagine a customer-support chatbot modelled after Gordon Ramsay. Hilarious.

Flynn Effect

The Flynn Effect is the sustained rise in intelligence test scores observed over the 20th century. IQ scores are normed so that the mean sits at 100, with the standard deviation (typically 15 points) spread on either side. Each time a test is re-normed against a new sample population, the mean is observed to have shifted upwards, by roughly a few points per decade.
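
A minimal sketch of what that re-norming looks like numerically; the raw-score figures below are invented for illustration:

```python
def iq_score(raw, norm_mean, norm_sd):
    """Convert a raw test score to an IQ score (mean 100, SD 15)."""
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Hypothetical numbers: the same raw score of 52, measured against an
# older norm sample (mean 50) and a newer, higher-scoring one (mean 55).
print(iq_score(52, norm_mean=50, norm_sd=10))  # 103.0 against the old norms
print(iq_score(52, norm_mean=55, norm_sd=10))  # 95.5 against the new norms
```

The same performance loses points when the reference population improves, which is why the upward drift only becomes visible when old and new norms are compared.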

There are a few possible explanations for this trend.

Total schooling time has increased steadily over the years, which means kids have had more chances to practice their analytical and cognitive skills.

Tests similar to IQ tests are used in many scenarios, so familiarity with this style of test has grown.

Kids now grow up in a more stimulating environment than in the past, with more video games and television stimuli. There are studies showing better cognitive performance and hand-eye coordination among those who game more.

Nutrition, in both quantity and quality, has improved over time. There is data suggesting that the brain is ever so slowly growing as well under these improved conditions.

Health conditions are better. Developed nations in general are more equipped to handle infectious diseases, and vaccinations are part of almost everyone’s upbringing.

Genetically Engineered Products

Any product aiming for a sizable market has to be generalized to some extent. To make the product efficient to create, distribute and market, it is essential that it appeals to the masses; keeping variability to a minimum is key to optimising the business as a whole. Take Apple, for example: back in the 2000s they were successful partly because of their relatively small product line. Fewer options for customers to choose from, but why would they need options if the product was just that good?

On the other end of the spectrum are products that are bespoke to an individual; tailored outfits are a good example. By definition the product needs to be unique and match a person’s style. Usually these kinds of businesses rely on physical, tangible parameters of a customer to make a customised product. With the reduced cost of DNA testing, a whole new vertical opens up: businesses that create hyper-customised products.

23andMe is a company that helps you map out your ancestral lineage using a sample of your DNA. The more customers use the service, the better its data and reach become.

GenoPalate is a business that aims to create a specialised diet plan based on your DNA. From the combination of genes a person carries, GenoPalate can come up with an optimal nutritional plan, along with analysis of how the body digests various types of food and substances (caffeine, alcohol, etc.).
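
As a toy illustration of the idea (not GenoPalate’s actual method), here is a sketch that maps a few well-studied gene variants to dietary notes; the genotype calls and the rule table are invented for illustration:

```python
# Toy illustration of DNA-driven dietary hints (not any company's real method).
# The gene-trait pairings are well-known associations; everything else is invented.

# Hypothetical genotype calls for one person.
genotype = {"LCT": "non-persistent", "CYP1A2": "slow", "ALDH2": "active"}

# Simplified (gene, variant) -> recommendation table.
rules = {
    ("LCT", "non-persistent"): "likely lactose intolerant: limit dairy",
    ("CYP1A2", "slow"): "slow caffeine metaboliser: cap coffee intake",
    ("ALDH2", "inactive"): "alcohol flush risk: limit alcohol",
}

plan = [advice for (gene, variant), advice in rules.items()
        if genotype.get(gene) == variant]
print("\n".join(plan) or "no flags found")
```

A real service would weigh many variants together rather than flag them one by one, but the lookup captures the basic gene-to-plan mapping.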

Bundling of Niches

Media in the past were limited by the medium: news by physically printed newspapers, the music industry by CDs, television by cable with a limited number of programmable channels. Distribution in these mediums was inherently limited. The internet and smartphone duo breaks this.

Until then, services were usually bundled. You didn’t have to subscribe to sports news, world news or business news separately; they all came as a bundle, and the same goes for television. The unit economics made bundling sensible even if not every customer was interested in each product or service. With the internet, however, it became easy to offer these individual products at no additional cost. And now we are seeing the great unbundling, as Ben Thompson wrote in a 2017 article.

There are two problems from the customer’s point of view. First, there are simply too many subscriptions today. According to Forbes, as of mid-2019 the average American subscribed to 3.4 streaming services, and managing subscriptions, payments and logins while still finding the right content to consume is often a task in itself. Secondly, most customers have a monthly subscription budget, which means they have to choose what they would like to subscribe to.

Recently there has been a great influx of individual creators trying to carve out a space for themselves. Substack has popularised the idea that anyone with a mailing list can start creating content and put some of it behind a paywall. Creators focus on niches to gain ground initially, but eventually they too diversify and spread out. That is not a bad thing, but is it still enough to justify a monthly paid subscription? Probably yes, because by then readers are buying into not just the content but also the brand around it.

A possible solution, and where this may be headed, is another wave of bundling: the great bundling of niches. An app store of sorts that provides a wide array of content, from Netflix to Substack newsletters, from news shows to sports. There could even be sections for individual creators, journalists and writers. Customers could then mix and match what they would like to subscribe to.

One subscription to rule them all.

Keyboard First

We spend a lot of time on laptops and computers, for work or for leisure, yet there are only a limited number of ways to interact with them: you can type, use a mouse or a touchpad, talk to them, or use gestures. The most common by far is the combination of keyboard and mouse, because that is how we were taught to use a computer. Perhaps that was because we were the early adopters, but that is no longer the case: kids are now born into a tech-rich environment like never before and are familiar with these devices from a very early age. The question, then, is whether typing and using a mouse is efficient. It turns out that typing alone is the most efficient hands-on way to interact with a computer. An above-average typist can type 70 words per minute (WPM), and if you really spend time honing that skill you can reach 100 WPM. Speaking averages around 150 WPM, but it is a poor fit for editing, navigation and precise commands, which is exactly where the keyboard shines.

This is nothing new among developers; they figured it out long ago. Vim is a text editor built for use from a terminal, and it uses a combination of modes and commands that lets you manipulate text at the speed of thought. The interesting thing is that most apps we use are not at all optimized to be keyboard-first. Most applications have keyboard shortcuts, but they are not built into the product in a way that makes them second nature to users. Superhuman has taken a stab at solving this for e-mail, building the complete application and its workflows around keyboard-based triggers. This trend is still at an early stage. The prediction is that keyboard-first applications (“Superhuman for X”) will get more and more attention. There is a growing market for software and techniques that improve your efficiency with day-to-day software. Keyboard-first will eventually become the norm.
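
To make “built around keyboard triggers” concrete, here is a minimal sketch of the command-dispatch core such an app could be built around. The actions and key bindings are hypothetical, not Superhuman’s actual ones:

```python
# Minimal sketch of a keyboard-first command dispatcher.
# Every action is reachable from a single keystroke; the mouse is optional.
from typing import Callable, Dict

def archive() -> None:
    print("conversation archived")

def reply() -> None:
    print("composing reply")

def next_item() -> None:
    print("moved to next conversation")

# One flat table of bindings; adding a feature means adding a row here,
# so keyboard access is the default rather than an afterthought.
KEYMAP: Dict[str, Callable[[], None]] = {
    "e": archive,
    "r": reply,
    "j": next_item,
}

def handle_key(key: str) -> None:
    action = KEYMAP.get(key)
    if action is None:
        print(f"no action bound to {key!r}")
    else:
        action()

if __name__ == "__main__":
    for key in ["e", "r", "j", "x"]:
        handle_key(key)
```

The design point is that the keymap is the primary interface, not a shortcut layer bolted onto a mouse-driven UI.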

Product Studios

Product studios are a 21st-century ecosystem that supports creativity and innovation, and they could be the next step for the startup world. With tools now available that make it easy to build applications, websites and services using intuitive, graphical interfaces, the cost of trying something out has gone down. A product studio formalizes this loop of trying something out, from an idea to a product or service delivered to a customer. Studios like this will become key in democratizing this process and spreading the word about how easy it has become to build something on your own.

Big companies that still innovate at every level of their organisation have, knowingly or unknowingly, built product studios into it. Some companies call this “culture”: building fast, testing fast, learning fast. When these principles become part of the company’s DNA, you are not stuck in long meetings trying to convince managers and executives.

Context and AI

AI and ML techniques have made big leaps this year, from AlphaFold’s breakthrough on the protein folding problem to GPT-3, whose language model can handle most text-related tasks. Computer vision techniques have improved as well. All this research has drivers behind it, usually a research question, a grant or a niche. But AI research into understanding real-life context is hard to come by, as there is no immediate beneficiary for such a model. The Google Assistant comes close: it looks at multiple sources of data in different forms, such as previous searches, calendar appointments and email, to make a better decision and provide a better reply. Models like these need to account for the great randomness that is humans. The same set of calendar appointments and search inputs can carry a different meaning and level of importance for different people. Judging this correctly from second-order inputs like usage patterns and keystrokes might be the next big step.
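
As a toy illustration of why identical inputs can mean different things to different people, here is a sketch that ranks the same context signals with per-user weights. The signal names, users and numbers are all invented; in practice the weights would be learned from behaviour:

```python
# Toy sketch: the same context signals, weighted differently per user.
# In a real system these weights would be learned from usage patterns;
# here they are hard-coded for illustration.

signals = {"calendar_meeting_soon": 1.0, "searched_restaurants": 1.0}

# Hypothetical per-user weights: how much each signal matters to this person.
user_weights = {
    "alice": {"calendar_meeting_soon": 0.9, "searched_restaurants": 0.1},
    "bob":   {"calendar_meeting_soon": 0.2, "searched_restaurants": 0.8},
}

def top_signal(user: str) -> str:
    """Pick the signal this user is likely to care about most right now."""
    w = user_weights[user]
    return max(signals, key=lambda name: signals[name] * w.get(name, 0.0))

print(top_signal("alice"))  # calendar_meeting_soon -> surface the meeting
print(top_signal("bob"))    # searched_restaurants  -> surface lunch spots
```

Identical inputs, different outcomes: the personalisation lives entirely in the weights, which is what second-order behavioural data would have to supply.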

Cloud Computing and Energy

It is projected that cloud computing will account for 13% of world electricity consumption by 2030. One prediction is that, in the future, computing power won’t be the bottleneck or the parameter to optimize; energy consumption will be. Energy use may even become a benchmark for developing state-of-the-art AI/ML algorithms. With chips going down to 5 nm and server-grade hardware pushing its limits as well, the performance bottleneck will soon be insignificant compared to the energy tax each iteration of an algorithm takes.
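
A back-of-the-envelope sketch of what such a benchmark could look like: joules per iteration rather than wall-clock time. The power draw, timing and price figures here are invented for illustration:

```python
# Back-of-the-envelope: energy as the benchmark instead of wall-clock time.
# All figures are hypothetical, for illustration only.

avg_power_watts = 300.0        # assumed draw of one accelerator under load
seconds_per_iteration = 0.5    # assumed time per training iteration
iterations = 1_000_000         # length of the training run
price_per_kwh = 0.12           # assumed electricity price in dollars

joules_per_iter = avg_power_watts * seconds_per_iteration   # W x s = J
total_kwh = joules_per_iter * iterations / 3.6e6            # 1 kWh = 3.6 MJ
print(f"{joules_per_iter:.0f} J/iteration")
print(f"{total_kwh:.0f} kWh total, ~${total_kwh * price_per_kwh:.0f} in electricity")
```

Under a benchmark like this, halving the joules per iteration counts as progress even if wall-clock time stays exactly the same.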

Energy will become the major operating cost for data centers and cloud servers, and the one parameter that can make companies like AWS run leaner on a daily basis. There are two possibilities then. Either there has to be a breakthrough in how we fundamentally store, retrieve and erase data on physical media, or in how we carry out computations; the latter would prove more useful, as computation is the more energy-hungry of the two. Logic-in-memory is a hybrid approach that combines both aspects and can save energy. The second possibility is that we figure out innovative ways to counter the energy problem on the infrastructure side. Microsoft’s Project Natick has claimed that underwater data centers are a viable option.

Spatial Computing

Spatial computing can be considered an extension of IoT to objects in physical space. It is a concept that spans technologies like virtual reality, augmented reality and mixed reality; AR headsets like the HoloLens use spatial computing to interact with the real objects around them. A futuristic scenario is one where the objects around you are not only connected to a common medium like the internet but also orchestrate themselves to achieve a higher-level goal. Or one where you could interact with objects thousands of miles away, or fix machines at the other end of the planet.

For a technology like this to manifest, there needs to be a high-speed, low-latency network that allows this kind of high-bandwidth communication across multiple objects, plus the subsequent processing of that data. Another prerequisite could be a common language that these disparate objects and their underlying technologies can use to talk to each other. Lastly, the use cases need to be simplified enough to make development and innovation profitable and viable. One use case that is already quite common: an array of sensors spread through the aisles of a supermarket to better understand customer behaviour.
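
As a sketch of what such a “common language” might look like, here is a minimal, hypothetical message format that any device could emit regardless of vendor. The schema and field names are invented for illustration:

```python
# Hypothetical common message format for disparate spatially-aware devices.
# Field names and values are invented for illustration.
import json
from dataclasses import dataclass, asdict
from typing import Tuple

@dataclass
class SpatialEvent:
    device_id: str                        # unique device identifier
    kind: str                             # e.g. "sensor.motion", "actuator.light"
    position: Tuple[float, float, float]  # x, y, z in a shared frame, metres
    payload: dict                         # device-specific reading or command
    timestamp: float                      # seconds since epoch

event = SpatialEvent(
    device_id="aisle-7-motion-03",
    kind="sensor.motion",
    position=(12.5, 3.0, 2.4),
    payload={"motion": True, "confidence": 0.92},
    timestamp=1600000000.0,
)

# Any device or service that speaks this schema can consume the event,
# which is what lets heterogeneous objects orchestrate themselves.
print(json.dumps(asdict(event)))
```

The shared position field in a common reference frame is what makes the messages “spatial” rather than just another IoT telemetry feed.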

Related

Embedded Intelligence

LiFi

LiFi stands for Light Fidelity, a communication technology that uses light to transmit data wirelessly between two devices. Transmission can occur in the infrared, visible or ultraviolet spectrum. However, since light cannot penetrate walls and gets attenuated by the medium, LiFi is limited to short distances; light reflected off walls can be used to extend the range. Light is not affected by electromagnetic fields, which makes LiFi suitable for use in conjunction with other systems like radar.

Every light fixture in a building can contain a modem and a chip to facilitate the communication. Since the light flickers at a very high frequency, the modulation is not visible to the eye. The hardware requirements for setting up a LiFi network are quite small, making it a viable candidate to scale, and every light-based IoT device could potentially serve as a node in a LiFi network. For now, though, deployments are limited to confined spaces without much interference from the sun. If that can be solved in some way, there is huge potential for disruption: traffic lights and vehicles in the vicinity could talk to each other, aircraft and ATC could use it as a communication channel, and cameras could be connected to all the lighting equipment on a movie set. The important question then is whether the hardware becomes so cheap that it is a better option than Bluetooth or any other RF-based hardware and software stack.
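
The “invisible flicker” is simply the data modulation. Here is a toy simulation of the simplest scheme, on-off keying, where each bit maps to the light being on or off for one symbol period; the symbol rate is left abstract, and real systems flicker far faster than the eye can see:

```python
# Toy simulation of on-off keying (OOK), the simplest LiFi-style modulation:
# a 1 bit = light on for one symbol period, a 0 bit = light off.

def transmit(data: bytes) -> list:
    """Encode bytes as a sequence of on/off light states, MSB first."""
    states = []
    for byte in data:
        for i in range(7, -1, -1):
            states.append((byte >> i) & 1)
    return states

def receive(states: list) -> bytes:
    """Decode on/off light states back into bytes."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

signal = transmit(b"hi")
print(signal)           # the light's on/off pattern over time
print(receive(signal))  # b'hi'
```

Real deployments add clock recovery, framing and error correction on top of this, but the principle stays the same: the light itself is the carrier.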