A supply chain is the system of activities required to convert raw materials into a final product, spread across time and geography. Traceability in such systems makes it easier to understand what is happening at each step of the process. This is especially critical in the food and pharmaceutical industries, which are working hard to trace each component that goes into their products.
With better traceability, it becomes easier to enforce regulations. Regulatory authorities can more easily verify that no banned substances or banned processes were used in any product.
From a business point of view, it becomes easier to optimize value chains for each product and find possible synergies. Businesses already do this, but at a higher level: not every company is interested in tracking raw materials all the way from the ore to the form they use. Traceability would make the history of each delivery, and the complete chain behind it, transparent.
Within the food industry, there is a growing need and market for products that are ethically sourced. Big companies want to be seen on the right side of history when it comes to sourcing from local farmers and institutions, and traceability in supply chains is one way to improve their overall brand image. The Fairtrade initiative is a good example: downstream companies like Ben & Jerry's source cocoa, sugar and other ingredients under it.
There are a few ways traceability can be introduced to a supply chain. Subway has made 98% of their products traceable using barcodes; RFID tags and alphanumeric codes are possible solutions as well. Blockchain is a relatively new technology being used for this purpose: a distributed ledger that cannot be tampered with keeps track of the supply chain, and all stakeholders are free to check its validity themselves. A drawback is that blockchains only become stronger and more viable when the amount of information is huge; for smaller amounts of data, a centralized approach would suffice.
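The tamper-evidence of such a ledger comes from each record carrying a hash of its predecessor, so changing any past entry breaks every link after it. A minimal sketch of the idea in Python (the field names and supply-chain events are made up for illustration, not taken from any real blockchain):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash a record deterministically by serializing it with sorted keys."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, payload: dict) -> None:
    """Append a supply-chain event, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"payload": payload, "prev_hash": prev}
    entry["hash"] = record_hash({"payload": payload, "prev_hash": prev})
    chain.append(entry)

def verify(chain: list) -> bool:
    """Any stakeholder can recompute the hashes to detect tampering."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != record_hash(
            {"payload": entry["payload"], "prev_hash": entry["prev_hash"]}
        ):
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"step": "harvest", "lot": "A1"})
append_entry(chain, {"step": "roast", "lot": "A1"})
assert verify(chain)

chain[0]["payload"]["lot"] = "B2"   # tamper with history
assert not verify(chain)            # verification now fails
```

A real distributed ledger adds consensus and replication on top, but the core tamper-evidence that stakeholders verify is exactly this hash-linking.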
Supply chains usually involve contracts between different parties, right from the start until the product reaches the customer. Blockchain may be a more viable way to manage these as smart contracts in one of its ledgers. These agreements are usually repetitive in nature and can span different geographies, time zones, currencies and so on. A distributed ledger in this case could act as a common protocol used by all stakeholders to manage and maintain contracts.
"Super app" is a term that became mainstream thanks to a 2015 podcast by Andreessen Horowitz. A super app is a closed ecosystem of multiple apps and services that work together seamlessly, offering users a wide range of options within that ecosystem.
WeChat is the classic example of a super app. It started out as a messaging app, then branched out into social media. Eventually, it integrated financial services that allowed users to send money to each other, order services, order in restaurants and use WeChat as a default payment method.
AliPay took a different path: they began as a payment app and then integrated other features.
Why are they becoming more popular? From a user's point of view, super apps can tie different services together and offer experiences that conventional apps just cannot. A single app is perhaps better at keeping users' attention: less context switching.
This is also a direct outcome of APIs fueling digital growth. Most companies would like to build a brand and business around a service or a product. Some companies are better off just offering an API and letting other businesses figure out the rest of the value chain. Getting integrated into a super app is like getting featured on the front page of Reddit, but for API companies.
Even though these kinds of apps are most common in Asia, there is growing interest in super apps globally. Google is probably at the right place at the right time to capitalise on such a service. They already run a tight ship with their suite of apps, and packaging them into a coherent super app might not be that far away.
AdNauseam is a free browser extension that tries to trick advertising networks by messing with your browsing data. It works locally and doesn't send your data out to any other services. It comes with three opt-in features: hide ads, click ads and block malware.
What it basically does is randomly click on ads on behalf of its users, creating a somewhat balanced mix of browsing history that leads advertising trackers astray. This strategy is known as obfuscation. It reduces the value of the data aggregated about the user, and the second-order effect is to pollute the data these services collect as a whole. Even if 1% of users ran such a tool, it could have profound effects on search results and targeted ads: platforms could effectively end up showing irrelevant ads to their users, leading to lower conversion rates.
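A toy simulation makes the dilution effect concrete. Here a tracker builds a naive interest profile from click counts; random clicks flatten the profile toward uniform noise (the categories and click counts are invented for illustration, and this is not AdNauseam's actual algorithm):

```python
import random
from collections import Counter

random.seed(0)  # deterministic for the example

CATEGORIES = ["sports", "travel", "finance", "cooking", "gaming"]

def profile(clicks):
    """A tracker's naive interest profile: fraction of clicks per category."""
    counts = Counter(clicks)
    total = sum(counts.values())
    return {c: counts[c] / total for c in CATEGORIES}

# A user who genuinely clicks only on "cooking" ads.
real_clicks = ["cooking"] * 50

# Obfuscation: the extension adds random clicks across all categories.
noise = [random.choice(CATEGORIES) for _ in range(200)]

clean = profile(real_clicks)               # cooking stands out at 100%
obfuscated = profile(real_clicks + noise)  # cooking is diluted toward uniform
```

With four fake clicks for every real one, the tracker's confidence that this user cares about cooking collapses, which is exactly the loss of targeting value described above.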
In a way, this directly attacks the incentive behind the whole advertising business. A company runs its ads on platforms like Google, Facebook and Twitter for two reasons: users spend a lot of time there, and the platforms understand what each user is interested in. If software like this reduces the value of those platforms, it will force companies to question them. AdNauseam is the start of a new era of privacy-focused applications.
Privacy as a Service
AdNauseam – White Paper
A business model that has existed in the app economy for a while is advertisements. Businesses use ads to monetize their website or app, and users can pay a premium for a better experience without ads. The more people who use the app, the more the business gets paid for displaying an ad on its pages. Apps have grown over the years to serve large parts of the population and have morphed into ecosystems, for example Facebook together with Instagram, WhatsApp, WhatsApp Business and so on. This in turn has increased the value of user data to a business: data about what a user is searching for or talking about on WhatsApp can then be used to target ads to them on another platform. None of this is new. But the spotlight on how it affects users' privacy is.
A privacy-as-a-service model could be an additional source of revenue, but it could backfire. Less data and more user awareness could mean a poorer ad service and reach for a social network company, and ads make up the largest part of revenue at these companies. At the end of the day, companies make money when users consume, and tracking your data is the gateway to that.
We live in a time where a decent internet connection can bring the world's knowledge to our fingertips. Smartphones, together with that, have created a new medium where most human attention is spent. Knowing what users are looking at and what their preferences are is a game-changer. Well, the knowing in itself is not the game-changer; the scale is. We have the possibility to look at a lot of data before making decisions. Yet even with access to data, we tend to decide based on gut feeling or someone else's opinion, on both a personal and an organizational level, simply because that's how we normally do it. There are a lot of decisions we could make in a much more informed way: choosing an employer, finding an apartment, picking a vacation spot or even buying a car.
So even though we are quite invested in this new medium, we haven't started to fully utilize the potential it brings. In this new medium, we don't have to physically show up somewhere to get paid; the content we create can do that for us, across the globe.
Every country's health system tries to report the number of COVID-19 cases and related deaths on a nearly daily basis. However, the actual system works differently in each country. In some countries, a person develops symptoms, orders a test that takes a few days to arrive, and then waits again for the results to come back. So the current tally is really a reflection of a past state.
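The mechanism is easy to see in a toy model: with a fixed turnaround time, the reported curve is just the actual curve shifted into the past (all numbers here are made up for illustration):

```python
# Toy model of reporting lag: what is reported today reflects infections
# from `delay` days ago.
actual_infections = [10, 20, 40, 80, 160, 160, 80, 40]  # made-up daily counts
delay = 3  # days from symptom onset to a reported test result

# The first `delay` days report nothing; after that, each day's report
# is the actual count from `delay` days earlier.
reported = [0] * delay + actual_infections[:-delay]

for day, (actual, rep) in enumerate(zip(actual_infections, reported)):
    print(f"day {day}: actual={actual:3d}  reported={rep:3d}")
```

Real reporting is messier (the delay itself varies), but even this fixed shift shows why a rising tally today describes the situation several days ago.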
This is similar to the light coming from stars that are light years away from Earth: what we are seeing is a snapshot of the past.
It's good to keep in mind how reporting systems work and what the data actually implies about the real world. If, for example, a company's financial reporting lags so far behind that the financial statements no longer reflect the day-to-day business, a discrepancy between price and value can open up quite quickly.
Startups that rely on data models and AI/data-science algorithms for running their business will only get better with time. Moreover, as time passes they gain leverage over newcomers, making it really hard to overthrow the leader in a vertical. In this space I think there are two strategies: a company can either niche down or expand to other verticals. If it niches down, over time it will collect niche-specific data and develop models that are effective in that small space. One way such a company can be overthrown is if a new technology arrives that doesn't require as much data to predict consumer behaviour. That sounds possible, but it still seems quite hard. Even if models get more efficient and computers become more powerful and cloud-first, there is a bottleneck in available data: there is only so much real-world data you can harvest, and out of that, only so much you can label and sanitize for training purposes.
Companies have more and more access to data: data from suppliers, from intermediaries, from customers and prospective customers. Traditional companies are not built to make use of all of it. The data has to go through multiple steps and different handlers before business insights can be generated. Typically, the team that handles the data pipeline is buried somewhere under "Engineering" or "Research and Development", while the users of the data are spread across every vertical: sales, marketing, logistics, even human resources. And the company itself generates a lot of data every day from its operations and employees. The crux of the problem with traditional organisations is that data is harvested and could probably be used in most parts of the organisation, yet not everyone is well-versed in how to work with data and make sense of it. Using data to make better decisions isn't a one-off ceremony; it is a feedback loop of varying cadence that needs close attention.
Wrapped is Spotify's year-end marketing campaign, and this year's edition was unveiled yesterday. Spotify collects and distills a year of your listening history to present your top artists, genres and more. As per Spotify's Q1 report, they have about 286 million monthly active users (MAU). That is a lot of data to process behind the scenes. This year's Wrapped has some new features:
- In-app quizzes where you can predict your top artist and podcast.
- Story of Your 2020 with your Top Song: a recollection of your most listened-to song of the year.
- A deep dive into podcast listening.
- New badges. Popular playlists can earn you a "Tastemaker", identifying a song before it becomes a hit makes you a "Pioneer", and collecting and curating songs into playlists makes you a "Collector".
- A personalized playlist with your most loved songs and the ones you missed.
- Global listening trends, this time open to non-users as well.
Wrapped involves both distilling data from each user and presenting it in a meaningful way. An article from last year gives a bit of insight into what happens in the background. Last year's Wrapped had an even larger scope, covering the whole decade instead of a single year. Spotify uses a data lake backed by Google Bigtable, which is optimized to aggregate data over an arbitrary period of time. Even though the amount of data was much larger, Spotify was able to reuse previously executed jobs: each user had a row in Bigtable, with a column holding the result for each year. Decoupling the data processes improves overall efficiency. Breaking user summaries into smaller data stories and workflows allowed for a more flexible system; insights like songs you might have missed need a recommendation system that uses these data stories as inputs.
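The row-per-user, column-per-year layout can be sketched with a plain dict standing in for Bigtable. The point is the reuse: a decade summary only touches the small precomputed per-year results, never the raw streaming logs (all user data and field names below are invented for illustration):

```python
# Each user has one "row"; each "column" holds that year's precomputed
# summary, mimicking the Bigtable layout described above.
table = {
    "user_1": {
        2018: {"top_song": "Song A", "minutes": 21000},
        2019: {"top_song": "Song B", "minutes": 34000},
        2020: {"top_song": "Song B", "minutes": 18000},
    },
}

def decade_summary(row: dict) -> dict:
    """Aggregate previously executed per-year jobs instead of rescanning raw streams."""
    minutes = sum(year["minutes"] for year in row.values())
    # Pick the song that tops the most individual years.
    tops = [year["top_song"] for year in row.values()]
    top_song = max(set(tops), key=tops.count)
    return {"total_minutes": minutes, "top_song": top_song}

print(decade_summary(table["user_1"]))
# {'total_minutes': 73000, 'top_song': 'Song B'}
```

Each yearly column is written once by that year's Wrapped job, so a larger-scope campaign composes cheap lookups rather than reprocessing 286 million users' full histories.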
It is projected that cloud computing will account for 13% of world electricity consumption by 2030. One prediction is that, in the future, computing power won't be the bottleneck or the parameter to optimize; energy consumption will be. Maybe that will even become a benchmark for developing state-of-the-art AI/ML algorithms. With chips going down to 5 nm and server-grade hardware pushing its limits as well, the performance bottleneck will soon be insignificant compared to the energy tax each iteration of an algorithm takes.
Energy will become the major operating cost for data centers and cloud servers, and the one parameter that can make companies like AWS run leaner on a daily basis. Two possibilities, then. Either there has to be a breakthrough in how we fundamentally store, retrieve and erase data on physical media, or in how we carry out computations; the latter would prove more useful, as it is the more energy-heavy of the two. Logic-in-memory is a hybrid approach that combines both aspects and can save energy. The second possibility is that we figure out innovative ways to counter the energy problem: Microsoft's Project Natick has claimed that underwater data centers are a viable option.