Meta is the next wave of the same ocean: social platforms

Social platforms are the new “community”.

But what does that mean for an AR-VR fueled future?

Social platforms on the internet eventually create their own sub-cultures. Unspoken rules about how to behave on the platform emerge. And eventually it begets a vibe.

Each platform encourages or discourages its own set of values, and therefore attracts different kinds of people. The personas of people on @reddit vs. @tiktok_us are different. Of course, there is some overlap, but by and large the platforms attract different personas.

As a whole, a platform is an unfathomable force on the internet powered by an army of users. The human mind is exceptionally bad at interpreting large numbers. Because they are hard to visualize, they end up being underestimated. 80,000 doesn’t seem like a large number. For reference, that’s how many people you could fit in the Colosseum.

The pattern of a culture emerging from a group is nothing new. We humans have been doing it for thousands of years. It was essential to the survival of our species.

To be part of a tribe.

However, the medium of such tribes has evolved.

The medium in which these social platforms thrive is still text, pictures and, more recently, video (@instagram Reels, @tiktok_us, @snapchat). Though video has been a really tough nut to crack. This is where @meta is at a really interesting juncture. It is creating a medium for the next generation of social platforms.

That to me is leverage unlike anything else.

Accumulating Thoughts

On average we have about 6,000 thoughts every day. What if we could record, track and classify all those thoughts? And feed them into a giant neural network to make a virtual version of yourself?

Writing, speaking, drawing, tweets, vlogs and all other forms of “content” we create are conscious to some extent. However, our thoughts are a constant stream. Like a fire hose. Mixed with both conscious and serendipitous thoughts.

Could this virtual self be more objective and less susceptible to emotions? Or will it also have your biases built into it? Could you have this virtual self answer calls for you? Reply to messages?

Could we use this set of thoughts to spawn other forms of virtual interactions? Perhaps chatbots. Imagine a customer support chatbot modelled after Gordon Ramsay. Hilarious.

Internet Money

One use case that will surely be filled by cryptocurrencies like Ethereum in the coming years is that of internet money.

Just as we had an era of web-first applications and shops, which then became mobile-first, internet-first money is yet to be fully realized. Money that is not tied to a physical entity per se, but at the same time can store and be a medium of exchange of value over the internet.

Cash, PayPal credits and credit are all good forms of value that individual users or businesses use to transfer wealth between each other. This can be done online too, but the underlying mechanism is still tied to bank accounts and balance ledgers maintained by different banks. The internet only acts as a proxy for the transaction.

Internet money, on the other hand, has the capability to sustain the transaction and its underlying mechanism end-to-end within the internet. Imagine if APIs wanted to exchange money, and not users or businesses. Internet money would be the default way to do it.
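To make the machine-to-machine idea concrete, here is a toy sketch of two services settling per-request micropayments against a shared ledger. All the names and numbers are hypothetical; this is not a real payment API, just an illustration of value moving between programs with no bank in the loop.

```python
# Toy sketch of machine-to-machine "internet money": two services
# settle per-request micropayments against a shared ledger.
# Service names and amounts are made up for illustration.

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = Ledger({"weather-api": 100, "geocoding-api": 100})

# The weather API pays the geocoding API for each lookup it makes.
for _ in range(3):
    ledger.transfer("weather-api", "geocoding-api", 1)

print(ledger.balances)  # {'weather-api': 97, 'geocoding-api': 103}
```

In a cryptocurrency the ledger above would be the blockchain itself, replicated across peers rather than held by any single party.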

By the looks of it, cryptocurrencies are the most poised to fill in this use-case.


Microfactories

Typically, factories follow a serial architecture. At the beginning of the chain is the raw material, which is then passed on to a series of stations that each perform a specific task on the material. This enables large volumes to be produced and opens up possibilities to optimize material flow and the tasks performed at each station. But one drawback is that it is difficult to manufacture different variants, or even different products, on the same production line. Serialized manufacturing units are common in a lot of industries.

Microfactories are a relatively new paradigm for manufacturing units. The factory is split into cells. Each cell is equipped with a set of robots and/or human operators that perform a series of tasks. Think of each cell as a consolidation of a few stations from the conventional paradigm. The robots can be programmed to build different products and carry various end-effectors for performing a set of tasks instead of just one. Each cell can therefore be programmed to build a whole line of products instead of just one, without giving up flexibility.
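The difference between the two layouts can be sketched in a few lines of code. This is a toy model, not any vendor's system: a serial station does one fixed task, while a microfactory cell holds a recipe of tasks per product and can be re-programmed without retooling a line.

```python
# Toy model of a microfactory cell: one cell, multiple product
# recipes. Product and task names are illustrative only.

class Cell:
    def __init__(self):
        self.recipes = {}  # product name -> ordered list of tasks

    def program(self, product, tasks):
        """Teach the cell a new product without physical retooling."""
        self.recipes[product] = list(tasks)

    def build(self, product):
        return [f"{task}({product})" for task in self.recipes[product]]

cell = Cell()
cell.program("van", ["weld", "bolt", "inspect"])
cell.program("bus", ["weld", "rivet", "wire", "inspect"])

# The same cell builds different products with different task lists.
print(cell.build("van"))  # ['weld(van)', 'bolt(van)', 'inspect(van)']
```

A serial line would be the equivalent of hard-coding a single recipe into the factory floor itself.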

Arrival, a UK-based electric vehicle startup, is pioneering this concept, developing microfactories to build its vehicles.

Digital Twin

A digital twin is a collection of digital data representing a physical object. The concept was born within the engineering disciplines: a digital twin is created to optimize and test designs even before they are manufactured. But the relationship between the digital twin and the physical one doesn’t end there. Once the object is manufactured, data and measurements are fed back to make the digital model more accurate.

In some cases, digital twins can be used to predict the life, wear and fatigue of an object before they occur in the physical object.
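One simple form such a life prediction can take is linear damage accumulation (Miner's rule): fatigue damage grows as the ratio of cycles seen to cycles-to-failure at each load level. The sketch below uses made-up load data and limits; a real twin would calibrate these from measurements fed back from the physical part.

```python
# Minimal digital-twin-style wear estimate using Miner's rule:
# damage = sum over load levels of (cycles seen) / (rated cycles).
# Load history and rated limits below are invented for illustration.

def accumulated_damage(load_history, cycles_to_failure):
    """load_history: {load_level: cycles observed so far}
    cycles_to_failure: {load_level: rated cycles at that level}"""
    return sum(n / cycles_to_failure[level]
               for level, n in load_history.items())

# Measurements streamed back from the physical part refine these.
history = {"low": 200_000, "high": 5_000}
limits = {"low": 1_000_000, "high": 50_000}

damage = accumulated_damage(history, limits)
print(f"{damage:.2f}")  # 0.30 -> the part has consumed ~30% of its life
```

When the accumulated damage approaches 1.0, the twin flags the physical part for inspection or replacement before failure occurs.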

The digital twin is extended beyond a single object within the autonomous driving domain. A digital world is created to mimic the physics and randomness of the real world, in order to train and improve the models that drive autonomously.

As the cost of computation and the necessary hardware comes down, there is a possibility that the digital twin of an object could be embedded within the object itself.

InterPlanetary File System

InterPlanetary File System (IPFS) is a protocol that can be used to store and share data in a peer-to-peer system, similar to HTTP, the protocol used on the internet to share and store information. One problem with HTTP is that the computer requesting a webpage has to establish a direct connection to the server storing it. Transactions are therefore serial and single-sourced.

IPFS, on the other hand, can download a file from multiple sources on the network. IPFS creates a cryptographic hash for each of the blocks within a file. These blocks can be stored in many different locations. IPFS uses a decentralized name service, IPNS (the equivalent of DNS for HTTP), to map human-readable names to these cryptographic hashes. This allows the requesting node to download several pieces of a file simultaneously from different locations.
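The core idea, hashing blocks so any peer holding a matching block can serve it, can be sketched in a few lines. Note this is a simplification: real IPFS uses multihash-encoded content IDs and much larger blocks, while this sketch uses plain SHA-256 and a tiny block size for readability.

```python
# Sketch of content addressing: split data into blocks and derive a
# hash per block, roughly as IPFS does. (Real IPFS uses multihash
# CIDs and ~256 KiB blocks; plain SHA-256 and 4-byte blocks here.)
import hashlib

def block_hashes(data, block_size=4):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

hashes = block_hashes(b"hello, ipfs!")

# Any node holding a block whose hash matches can serve it, so the
# same file can be fetched piecewise from many peers at once.
print(len(hashes))  # 3 blocks of 4 bytes each
```

Because the address is derived from the content itself, a corrupted or tampered block simply fails to match its hash and is rejected.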

IPFS essentially decouples the content from the addressing on a network. This would allow the requesting node to download a file from the nearest possible location instead of being bottlenecked to one location.

Lightning Network

Blockchains are now widely used to perform transactions between two parties, where the blockchain ensures the validity of the transaction and records it in the distributed ledger. However, for a transaction to be completed, all the individual peer copies of the ledger need to be updated. And this can be very time- and resource-intensive.

Enter the Lightning Network.

The Lightning Network is a protocol that works on top of a blockchain and allows transactions to happen off the chain. The protocol allows a payment channel to be opened between two parties, validated by the blockchain. Once a payment channel is set up, all transactions between the two parties can happen near-instantaneously. The record of each transaction only needs to be updated in a certificate tied to the payment channel. Once a payment channel is closed, it is recorded on the main blockchain.

For example, a customer can open a payment channel with a coffee shop and use it to pay for coffee every day. Once the payment channel is closed, say once a year, the records are ratified and added to the main blockchain.
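The coffee-shop example can be sketched as a toy channel: deposits are locked on-chain when the channel opens, balances are updated off-chain per payment, and only the final state is settled on-chain at close. This is a deliberate simplification; real Lightning channels use signed commitment transactions and HTLCs rather than a trusted object like this.

```python
# Toy model of a payment channel: open with deposits, update balances
# off-chain per payment, settle once on close. A simplification of
# Lightning, which uses signed commitment transactions instead.

class PaymentChannel:
    def __init__(self, customer_deposit, shop_deposit):
        self.balances = {"customer": customer_deposit, "shop": shop_deposit}
        self.settled = False

    def pay(self, payer, payee, amount):
        assert not self.settled and self.balances[payer] >= amount
        self.balances[payer] -= amount   # off-chain: instant, no mining
        self.balances[payee] += amount

    def close(self):
        self.settled = True
        return dict(self.balances)  # only this final state hits the chain

channel = PaymentChannel(customer_deposit=50, shop_deposit=0)
for _ in range(10):                  # ten coffees, zero on-chain writes
    channel.pay("customer", "shop", 3)

print(channel.close())  # {'customer': 20, 'shop': 30}
```

Ten daily purchases collapse into a single on-chain settlement, which is exactly the resource saving the Lightning Network is after.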

Robots that can feel

Our skin has many types of neurons that allow us to feel upon touch. These receptors come in different kinds, triggered by different stimuli, but they activate in the same way once triggered. Thermoreceptors trigger on changes in temperature. Nociceptors trigger on pain. Mechanoreceptors trigger on mechanical stress. These receptors send signals to the spinal cord and the brain to register a touch.

A combination of these receptors spread across an area produces different patterns of activation depending on the stimulus. This is how we are able to distinguish textures and types of touch.

In general, robots are really good at predictable motions that can be broken down into a set of axes with a known distribution of forces. This is why a robot is used to weld different parts of a vehicle body, but a human operator is used to install an intricate wiring harness.

Tactile feedback would be a large improvement in the feedback loop for robots if they are to have even a fighting chance at learning more complex tasks. A new technology developed by a team at the University of Hong Kong allows robots to detect tactile inputs at super-resolution.

The system uses a flexible magnetized film as the skin and a printed circuit board as the structure. The film creates a magnetic field within the device, and the subtle changes in this field are sensed to determine the touch.

Bone Conduction Earphones

Ludwig van Beethoven is rumored to have invented this technology. The famous composer, who was also deaf, used to press one end of a rod against a piano while holding the other end in his mouth. This would transmit the vibrations, allowing him to hear. Rumor or not, that is the gist of how bone conduction works.

Sound is perceived by our ears through variations in air pressure. The eardrum vibrates based on these pressure differences, transmitting the vibrations to a set of small bones and then to the fluid-filled cochlea.

Bone conduction uses the bones of the skull to transmit the vibrations to the cochlea directly, bypassing the ear canal, the eardrum and the small bones of the ear. Bone conduction earphones usually sit on the cheekbones. The isolation and audio quality of these headphones are poor, because the ear is fully exposed. However, they provide people with hearing deficiencies and hearing loss another way to perceive sound.

Google Glass used bone conduction technology in its devices to transmit information to the wearer. X, Alphabet’s moonshot lab, is reportedly working on super-hearing technology that would allow the voices of specific individuals to be separated from those of a group.

Apple’s shift to ARM

ARM is both a company and an instruction set used in CPUs. ARM and x86 (used by Intel) are sets of instructions that a CPU can understand and execute. The instruction set depends on the architecture the actual silicon is built on, which is why it is often used synonymously with CPU architecture. While Intel sells its CPU cores directly to manufacturers, ARM, the company which might be acquired by Nvidia, licenses the standard for other companies to design chips for their own devices. This enables manufacturers to build custom hardware tailor-made for their application. ARM also offers the possibility of heterogeneous computing, which allows CPUs to have different cores specifically built for different applications. A common example is a device with a multi-core architecture that includes specific cores for running machine learning applications.

Apple’s shift to ARM processors for the Mac line-up was in the making for quite some time, if not exactly in the form of chips for laptops. Apple has always been a proponent of building both the software and the hardware for its devices, and now it is going all the way. Apple understood the synergies that would bring: substantial cascading effects on the feature set and user experience of its devices. But hardware is hard. Apple chose which components were strategic to its future products and invested in them. It was successful in doing that for the iPhone and iPad. These devices run on ARM processors designed by Apple but manufactured by other companies like Samsung.

A moat around Apple devices is the high switching cost. Once you are in the ecosystem, it becomes increasingly hard to switch to another platform. A main reason Apple was able to make this work is the continuity its devices offer. Features like AirDrop gave seamless a new meaning. It just works. Moving all its devices to ARM processors could mean saving even more development cost, since Apple no longer has to develop for different platforms. It can migrate features and functionality from the OS and drivers it has perfected over the years on the iPhone to the Mac line-up.

From a business point of view, owning the design of its chips means Apple can save a lot. Intel integrated design and manufacturing and charged heavily for its design services. Now Apple can keep those margins for itself and outsource manufacturing to companies like TSMC.