A forgetting curve is a representation of memory retention over time. Early studies conducted by Ebbinghaus showed that, given no attempt to retain a memory, the forgetting curve follows a roughly exponential decay. But how do you retain a memory?
Memories in our brain become more consolidated when we try to recall them. That is why being tested on new concepts helps you remember them better. However, testing yourself right after you learned something doesn’t help. This is where spaced repetition comes into play: the more spaced out the repetitions, the more you retain, with less effort on every subsequent attempt.
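The idea above can be sketched with a toy model. The exponential form of the retention curve and the doubling of memory "stability" per successful review are illustrative assumptions for this sketch, not Ebbinghaus's actual fitted model:

```python
import math

# Toy sketch of the forgetting curve plus spaced repetition. The form
# R = exp(-t / S) and the doubling of stability per successful review
# are illustrative assumptions, not Ebbinghaus's fitted model.

def retention(t_days, stability):
    """Estimated probability of recall after t_days, for a memory of given stability."""
    return math.exp(-t_days / stability)

def review(stability, growth=2.0):
    """A successful, well-spaced recall consolidates the memory."""
    return stability * growth

s = 1.0                               # freshly learned
print(round(retention(7, s), 3))      # → 0.001: after a week, nearly forgotten
for _ in range(3):                    # three spaced reviews
    s = review(s)
print(round(retention(7, s), 3))      # → 0.417: same gap, far better recall
```

The point of the sketch is the shape of the effect: each spaced review stretches the curve, so the same time gap costs far less retention.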
When a new experience causes us to recall an already existing memory, there are two possibilities. If the new experience agrees with the existing memory, our brain consolidates it even more. If it differs, our brain creates a new memory based on the original one. A representation is shown in the picture below.
Complexity theory and related concepts emerged quite recently, toward the late 20th century. Complexity means diversity: multiple concurrent interdependencies between the different elements of a system. Human cells are themselves a very complex system, yet they can self-organize, form groups, create variations, and give rise to even more complex systems such as a human.
Complexity theory studies complex systems: systems composed of many components that interact and depend on each other in different ways. Key concepts include systems, complexity, networks, nonlinearity, emergence, self-organization, and adaptation.
A business is also a highly complex system, comprising complex individuals both in and around the business who interact with it simply by creating products and buying or selling them.
Our skin has many types of neurons that allow us to feel touch. These are receptors of different kinds that are triggered by different stimuli but activate in the same way once triggered. Thermoreceptors respond to changes in temperature, nociceptors to pain, and mechanoreceptors to mechanical stress. These receptors send signals to the spinal cord and the brain to register a touch.
A combination of these receptors spread across an area triggers in different patterns depending on the stimulus. This is how we are able to distinguish textures and types of touch.
In general, robots are really good at predictable motions that can be broken down into a set of axes and have a known distribution of forces. This is why a robot is used to weld the parts of a vehicle body, while a human operator is used to install an intricate wiring harness.
Tactile feedback would be a large improvement in the feedback loop for robots if they are to have even a fighting chance of learning more complex tasks. A new technology developed by a team at the University of Hong Kong allows robots to detect tactile inputs at super-resolution.
The system uses a flexible magnetized film as the skin and a printed circuit board as the structure. The film creates a magnetic field within the device, and subtle changes in this field are sensed to determine the touch.
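One common way such a sensor array can localize a touch more finely than its sensor spacing is by interpolating between neighboring readings. The sketch below is a hypothetical illustration of that idea (a weighted centroid over field changes), not the team's actual algorithm; the grid layout and sensor pitch are assumptions:

```python
# Hypothetical sketch of super-resolution touch localization: a press deforms
# the magnetized film and changes the field at several nearby sensors; a
# weighted centroid of those changes estimates the contact point between
# sensors. Layout and pitch are assumptions, not the actual design.
SENSOR_PITCH_MM = 5.0  # assumed spacing between neighboring field sensors

def localize_touch(baseline, reading):
    """Estimate the (x, y) contact point in mm from 2-D grids of field readings."""
    total = wx = wy = 0.0
    for r, (base_row, read_row) in enumerate(zip(baseline, reading)):
        for c, (b, v) in enumerate(zip(base_row, read_row)):
            d = abs(v - b)     # magnitude of field change at this sensor
            total += d
            wx += c * d
            wy += r * d
    if total == 0:
        return None            # no field change: no touch
    # The weighted centroid interpolates between sensors, giving finer
    # resolution than the physical grid.
    return (wx / total * SENSOR_PITCH_MM, wy / total * SENSOR_PITCH_MM)

baseline = [[0.0] * 4 for _ in range(4)]
reading = [[0.0] * 4 for _ in range(4)]
reading[1][2] = 0.6   # press centered near sensor (row 1, col 2)...
reading[1][1] = 0.2   # ...but spilling into neighbors
reading[2][2] = 0.2
print(localize_touch(baseline, reading))   # → (9.0, 6.0), between sensors
```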
Grid cells are special types of neurons that help us perceive our position in a larger context, for example, our position in the room we are sitting in. The cells themselves are arranged in a grid-like pattern and fire based on our position. An array of such cells successfully encodes location, distance, and direction. Grid cells are seen in the neocortex of the brain.
The neocortex is the part of the brain involved in higher-order functions such as cognition, spatial reasoning, and language. The classical view of how the neocortex works is that it receives sensory inputs and processes them in a series of hierarchical steps, passing the sensory information from one region to the next. A high-level object is assumed to be grasped once the information has passed through all the regions.
This paper proposes a new theory. It starts by stating that there are many more grid cells in the neocortex, arranged in columns and rows. Each column creates its own model of an object based on slightly different sensory inputs. These models then vote to reach a consensus on what is being sensed. It is as if there were many tiny brains within our brain, and what we sense and perceive is a weighted average of all their outputs.
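The voting step can be sketched as a simple majority vote. The object labels and column guesses below are made up for illustration; the paper's actual mechanism is neural, not a literal tally:

```python
from collections import Counter

# Toy sketch of the voting idea: many cortical columns each form their own
# guess about an object from partial, noisy sensory input, then vote.
# Labels and guesses are invented for illustration.

def consensus(column_guesses):
    """Return the label most columns agree on, with its vote share."""
    votes = Counter(column_guesses)
    label, count = votes.most_common(1)[0]
    return label, count / len(column_guesses)

# Seven columns touch different parts of the same object; most recognize it.
guesses = ["cup", "cup", "bowl", "cup", "cup", "can", "cup"]
label, share = consensus(guesses)
print(label, round(share, 2))   # → cup 0.71
```

Even with a couple of columns misreading their patch of input, the consensus is robust, which is the appeal of the many-models-voting view.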
A deep learning network contains multiple learning nodes, separated into layers and interconnected both within and across layers to form a network. Typically, deep learning models are trained by activating all nodes for each training input. Another way to train the network is to activate it sparsely for each input, with the help of a Switch Transformer. This means that only a subset of the nodes is active, and the subset varies depending on the input. Sparsely activated networks have a constant computational cost per input regardless of the size of the whole network. The key feature of sparse activation is that it enables different parts of the network to specialize in different kinds of inputs and problems, much like the brain: our brain has different regions responsible for different cognitive functions. However, this also brings new challenges such as load balancing, to avoid overtraining some parts of the network while undertraining others.
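A minimal sketch of the routing idea, assuming toy stand-ins for the router and the experts (the real Switch Transformer uses a learned router over transformer feed-forward blocks). The point is that only one expert runs per input, so per-input compute stays constant however many experts the full model has:

```python
# Minimal sketch of sparse activation with top-1 routing, in the spirit of
# the Switch Transformer. Router and "experts" are toy stand-ins.

NUM_EXPERTS = 4
# Each expert is a toy sub-network: here, a distinct linear function.
experts = [lambda x, w=w: w * x for w in (0.5, 1.0, 2.0, 4.0)]

def route(x):
    """Toy router: map each input to exactly one expert."""
    return x % NUM_EXPERTS

def forward(x):
    """Only the routed expert is evaluated; the other three stay idle."""
    return experts[route(x)](x)

# Load balancing: if the router favored one expert, it would overtrain
# while the rest stayed untrained. This toy router spreads inputs evenly.
counts = [0] * NUM_EXPERTS
for x in range(1000):
    counts[route(x)] += 1
print(counts)   # → [250, 250, 250, 250]
```

Adding more experts grows the parameter count without growing the work done per input, which is what lets such models scale to trillions of parameters.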
Google has trained language models consisting of 1.6 trillion parameters using this technique. The nearest comparable model was GPT-3, which consisted of 175 billion parameters. This gives an idea of the leverage these models have.
Equity research and journalism are similar in some aspects.
– Both have an investigative component. Equity research involves uncovering information that is not apparent or in plain sight, similar to how investigative journalism works.
– Both involve filtering signal from a lot of information and creating a narrative around it. Historians are particularly good at this, and there are plenty of them in both verticals.
– Both cover a wide variety of topics. In equity research it could be stocks, bonds, economics, and the different industries; in newspapers you see politics alongside sports alongside business.
– Both have to adapt to falling information costs. The gap between “quality” and “quantity” has widened.
– Both compete for engagement. They want to write what their readers want to read.
But they differ as well. The newspaper industry has three main players: the editorial staff, the advertisers, and the readers. Equity research, meanwhile, has research analyst firms and institutional investing firms. The business models are quite different.
Batteries have always been a limitation for electrifying devices and vehicles.
“Today, batteries account for a substantial portion of the size and weight of most electronics. A smartphone is mostly a lithium-ion cell with some processors stuffed around it. Drones are limited in size by the batteries they can carry. And about a third of the weight of an electric vehicle is its battery pack. One way to address this issue is by building conventional batteries into the structure of the car itself, as Tesla plans to do. Rather than using the floor of the car to support the battery pack, the battery pack becomes the floor.” – Wired
One way to increase overall efficiency is to reduce the weight of the battery storage. This can be done either by embedding the battery within the structure or by making the structure itself the battery. In a structural battery, the cells have to be molded into the shape of an aircraft body or a smartphone case. But structural batteries are a huge safety risk: a crash or dent could potentially set off an unstoppable chemical reaction. Aviation is a hard industry to electrify, simply because jet fuel is roughly 40 times more energy dense than typical lithium batteries, which means electric airplanes would end up being really heavy. Embedding batteries into different parts of the structure isn’t as efficient as making the structure from the battery itself. New cell chemistries are being researched where the electrolyte is a semi-rigid polymer resembling cartilage. Such cells could potentially be embedded into moving parts like robots, much like fat: fat is an efficient energy store, it is distributed across the body, and it serves other functions such as insulation.
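A quick back-of-the-envelope check of the energy-density gap. The specific-energy figures are rough, commonly cited values (assumptions), and conversion-efficiency differences between jet engines and electric motors are ignored:

```python
# Back-of-the-envelope check of the "40 times more energy dense" claim.
# Specific energies are rough assumed figures; engine vs motor efficiency
# differences are ignored for simplicity.
JET_FUEL_WH_PER_KG = 12_000   # ~43 MJ/kg
LI_ION_WH_PER_KG = 250        # typical pack-level lithium-ion

ratio = JET_FUEL_WH_PER_KG / LI_ION_WH_PER_KG
print(ratio)   # → 48.0, the same order as the ~40x figure

# Mass needed to carry an assumed 100 MWh of onboard energy:
energy_wh = 100e6
print(energy_wh / JET_FUEL_WH_PER_KG / 1000)   # ≈ 8.3 tonnes of jet fuel
print(energy_wh / LI_ION_WH_PER_KG / 1000)     # → 400.0 tonnes of batteries
```

Two orders of magnitude in mass for the same stored energy is why aviation is so hard to electrify.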
Federated learning is a machine learning technique that trains algorithms on separate local data samples without exchanging them. This enables training an algorithm across multiple devices, which is a huge plus from a data privacy and data security point of view. The basic principle is that the algorithm is trained on the locally available data, and the resulting model parameters are then exchanged with other instances, which can be organized either centrally or in a decentralized way. Determining the characteristics of the data from the parameters alone is close to impossible. Splitting datasets into smaller local sets also counteracts biases that may only appear in some of them. Smartphones use this form of learning: a central model is retrieved from the cloud, the local data produced by the smartphone (for example, usage statistics or keyboard strokes) is used to update the model, and the updated model is then sent back to the cloud over secure channels. This shields the raw user data from the external cloud infrastructure.
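The train-locally, average-centrally loop can be sketched in a few lines, in the spirit of federated averaging (FedAvg). The "model" below is a toy single parameter fit toward each device's data mean, not a real learner, and the learning rate and round count are arbitrary:

```python
# Minimal sketch of federated averaging: each device trains on its own
# private data; only model parameters travel to the server, which averages
# them. The "model" is a toy single parameter, not a real learner.

def local_train(global_param, local_data, lr=0.5):
    """One local training step: move the parameter toward this device's data."""
    local_mean = sum(local_data) / len(local_data)
    return global_param + lr * (local_mean - global_param)

def server_aggregate(params, weights):
    """Weighted average of device parameters; raw data never leaves devices."""
    return sum(p * w for p, w in zip(params, weights)) / sum(weights)

devices = [[1.0, 2.0], [3.0, 5.0], [10.0]]   # private local datasets
global_param = 0.0
for _ in range(10):                          # communication rounds
    params = [local_train(global_param, d) for d in devices]
    global_param = server_aggregate(params, [len(d) for d in devices])
print(round(global_param, 2))   # → 4.2, the mean over all devices' data
```

Note that the server converges to the global data mean while only ever seeing one number per device per round, which is the privacy argument in miniature.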
A major field where this can be used is digital health, covering everything from the data harvested from consumer wearables to data from hospitals and insurers. It fits the criterion of highly dispersed data and can still satisfy legislation like the GDPR.
Brain simulation deals with creating a computer model of the brain. Such models can help in understanding diseases and reduce the need for animal experiments. The challenges in brain simulation are:
Scale : The human brain contains about 86 billion neurons, each with about 7,000 connections. This pushes even the largest exascale computers to their limits. Exascale = a quintillion (10^18) operations per second. Rat brains are the state of the art for now.
Complexity : To exactly mimic a neuron and its molecular-scale processes, each model neuron would need an enormous set of parameters to be trained. Studies are ongoing to see which of these parts are important for a better simulation and which can be left out.
Speed : Learning and training in the brain occur over years, and current technology does not let us run simulations faster than real time. This puts a hard constraint on the depth to which we can train a model. The ability to model at higher speeds, and perhaps beyond, could open doors to better simulating the synapses.
Integration : Our brain consists of different regions that handle different functions. To model this, we need to build smaller models and combine them to achieve brain-wide function. This could lead to simulating aspects like consciousness and understanding.
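The scale challenge above can be made concrete with quick arithmetic. The per-synapse operation count and time resolution below are made-up, optimistic figures just to show the order of magnitude:

```python
# Back-of-the-envelope scale estimate from the figures above.
neurons = 86e9           # ~86 billion neurons
synapses_each = 7000     # ~7,000 connections per neuron
synapses = neurons * synapses_each
print(f"{synapses:.2e} synapses")       # → 6.02e+14

# Assuming ~10 floating-point ops per synapse per time step (a made-up,
# optimistic figure) at 1 ms resolution (1,000 steps per second):
ops_per_second = synapses * 10 * 1000
print(f"{ops_per_second:.2e} ops/s")    # → 6.02e+18: exascale territory
```

Even with generously simplified neurons, real-time whole-brain simulation sits right at the edge of today's largest machines, which is why rat-scale models remain the state of the art.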
Some interesting questions
Would such simulations lead to generation of more human aspects like consciousness and imagination?
Can such a model be used to augment the capabilities of the human brain?
Can we transfer information between such models in a more intrinsic way?