
Our mission is to put superintelligence in the hands of every single person, small business, commercial enterprise, school, and government in the world to accelerate their own missions. We’re here to enable access to building the next generation of apps that advance the next generations. We believe that giving someone access to a capability they previously lacked lets them do what they couldn’t do before, and empowers them to go off and change the world.

LETTER FROM THE CEO

The purpose of this Annual Shareholder Letter is to create one place where we review the work we did last year and state our charter and mandate for the upcoming year. We have a lot going on, so rather than create separate materials across our network, I decided to use this format as our distribution method to communicate what’s on the horizon for our business. It’s a bit unconventional at this stage, but we’ve never taken the conventional path in anything we’ve ever done, so why start now?

If you want to go straight to the good stuff, go to NOW: TIME FOR LIFTOFF, and if you want only the great stuff, go to WHAT TO LOOK FORWARD TO IN 2025.


Lee Hnetinka
Chief Executive Officer
FutureAI

THE PAST YEAR: CORE INFRASTRUCTURE DEVELOPMENT

Last year we developed our own specialized Latent Dirichlet Allocation (LDA) model, trained on a proprietary email dataset, as a method for extracting the core understanding of user communications. The model’s sophistication extends beyond basic topic extraction: it takes the topics it produces (the core categories of a user’s communications) and reconstructs them with deep contextual insights. For instance, when analyzing a given topic, the model will distinguish whether it is looking at the communications of a recreational tennis player, a tennis spectator, or a professional tennis player, providing a very granular understanding of a user. The innovation in this model is speed: it reaches this level of granularity and accuracy in seconds, whereas previous models of this nature took minutes.
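
For readers who want a concrete picture of the topic-extraction stage, here is a minimal sketch using scikit-learn’s off-the-shelf LDA. The sample emails, topic count, and library choice are illustrative assumptions; our production model, training data, and the contextual-reconstruction stage are proprietary and not shown.

```python
# Minimal sketch of LDA topic extraction over email text, assuming
# scikit-learn. Dataset and topic count are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

emails = [
    "Court is booked for Saturday, bring your racket",
    "Great match on TV last night, that tiebreak was wild",
    "Invoice attached for Q3 coaching sessions and tour travel",
]

# Convert raw email bodies into a bag-of-words matrix.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(emails)

# Fit an LDA model; n_components is the number of latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)       # per-document topic distribution
print(doc_topics[0].round(2))           # topic mix for the first email

# Top words per topic give the "core categories" of a user's mail.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")
```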

As we delivered this model to our first pilot customer, which was building a generative social network, the need for a relationship graph model arose. This became the second model we built and delivered: it leverages interaction data across direct communications, carbon-copy patterns, message frequency, and semantic tone to quantify the relationship strength between two users. We delivered both of these models to our pilot customer and enabled them to overcome the cold-start problem of traditional social networks.
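
Below is a hedged sketch of how such a relationship-strength score might be assembled. The letter only names the four signal families; the feature names, weights, and normalization here are hypothetical stand-ins, not our production model.

```python
# Hypothetical relationship-strength scoring between two users. The
# four signals mirror the ones named above; everything else (weights,
# saturation constants) is an illustrative assumption.
import math
from dataclasses import dataclass

@dataclass
class InteractionFeatures:
    direct_messages: int  # emails exchanged directly between the pair
    cc_appearances: int   # times one user is CC'd alongside the other
    days_active: int      # days with contact in a 30-day window
    avg_tone: float       # mean semantic-tone score in [-1, 1]

def relationship_strength(f: InteractionFeatures) -> float:
    """Combine interaction signals into one strength score in [0, 1]."""
    # Saturating transform so heavy emailers don't dominate the scale.
    volume = 1 - math.exp(-0.05 * (f.direct_messages + 0.5 * f.cc_appearances))
    frequency = min(f.days_active / 30, 1.0)
    tone = (f.avg_tone + 1) / 2  # map [-1, 1] onto [0, 1]
    return 0.5 * volume + 0.3 * frequency + 0.2 * tone

print(relationship_strength(InteractionFeatures(40, 12, 18, 0.4)))  # ~0.77
```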

Our data ingestion pipelines and specialized models played a significant role in accelerating Linkedto.ai, a new generative social network and our first pilot customer, past the cold-start problem, helping demonstrate who a user is “linked to” in their network through a generative process and the semantic nature of their emails. Seeing this made us look at how we could serve a broader customer base, starting with government agencies.

We started with a simple vision and value proposition for accelerating their mission: deliver a platform with multi-modal data ingestion pipelines and a standardized SDK for prompting users when their data needs to be ingested, and allow this to be paired with any underlying LLM or specialized model depending on the type of inference, privacy, and security needed.

This is what we have built, and it is when we first started to think of FutureAI like a modern gas station delivering different octanes of gasoline for different types of engines. In our case, the octane level is determined by the data (the oil in our world) and the type of inference needed, i.e. a generalized LLM or a specialized model; the engine type is the type of application needed for the enterprise or government use case. The data ingestion and models required for topic-based generation in a social network differ from those required for a threat-detection model, where an AI analyzes data across a network and, instead of escalating to a human, autonomously remediates the low-level threats it is trained to handle.

By thinking about FutureAI like a gas station, making data distributable to any enterprise or government agency (even if the data is their own), running general or specialized inference on that data based on the application being built, and then building and deploying the full AI-native application, we see these as the key initial components of a platform like ours: datasets (the ability to ingest unstructured data or prompt a user to opt in to sharing their data for processing); models (generalized LLMs, specialized models, or a combination of both); enterprise- and government-grade privacy and security for data ingestion (both in the cloud and on-prem), paired with our own private compute infrastructure for hosting and training models and storing data and embeddings; SDKs; and a developer playground.

This is how we look at each part:

Unstructured Dataset Ingestion – Most datasets are like oil before it is refined into usable gasoline octanes: they need to be cleaned and structured before we feed them to our models. You can bring your own unstructured dataset to FutureAI, or use the data ingestion pipeline and SDK we’ve built to prompt a user to connect their Gmail and Plaid data for inference, for instance. In some cases you may want to combine multiple datasets, and we allow for this. For Gmail and Plaid ingestion we’ve built everything; each proprietary dataset will, at first, require a per-customer build for that type of data ingestion.
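
To make the “refining” step concrete, here is a minimal sketch of turning one raw email into a structured record a model can consume, using only the Python standard library. The record fields are illustrative assumptions; our actual pipeline schema is more involved.

```python
# Illustrative "refining" pass: raw unstructured email in, structured
# record out. Field names are hypothetical, not our real schema.
import re
from email import message_from_string

raw = """From: alice@example.com
To: bob@example.com
Subject: Saturday court booking

Court is booked for 9am, bring your racket."""

msg = message_from_string(raw)

record = {
    "sender": msg["From"],
    "recipients": [r.strip() for r in msg["To"].split(",")],
    "subject": msg["Subject"],
    # Collapse whitespace so downstream tokenization is consistent.
    "body": re.sub(r"\s+", " ", msg.get_payload()).strip(),
}
print(record)
```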

Models – Our platform is model-agnostic, and we can host any open-source model for you. We have an entire private compute infrastructure for hosting models privately, and we are creating our own proprietary model garden, which today includes our topic- and category-based LDA model and our relationship graph model for quantifying the strength of a user’s connections. Specialized models we build will live in this model garden as we deploy solutions for each customer, such as cybersecurity threat detection and remediation neural networks.
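
The following sketch illustrates what model-agnostic dispatch can look like: generalized LLMs and specialized models registered behind one interface. The registry shape and model names are hypothetical, not our actual hosting architecture.

```python
# Hypothetical model-garden registry: any hosted model, generalized or
# specialized, is reachable behind the same inference interface.
from typing import Callable, Dict

ModelFn = Callable[[str], str]
MODEL_GARDEN: Dict[str, ModelFn] = {}

def register(name: str):
    """Decorator that adds a model's inference function to the garden."""
    def wrap(fn: ModelFn) -> ModelFn:
        MODEL_GARDEN[name] = fn
        return fn
    return wrap

@register("topic-lda")
def topic_model(text: str) -> str:
    return f"topics({text[:24]}...)"   # stand-in for real LDA inference

@register("relationship-graph")
def graph_model(text: str) -> str:
    return f"edges({text[:24]}...)"    # stand-in for real graph inference

def infer(model_name: str, text: str) -> str:
    # Applications choose a model by name; swapping models never
    # changes the calling code.
    return MODEL_GARDEN[model_name](text)

print(infer("topic-lda", "Court is booked for Saturday morning"))
```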

SDK – Our SDKs allow an application to prompt a user to securely share their Gmail and Plaid data, and are like the standard gasoline nozzle at a gas station, which was invented to standardize how humans fill up their cars with gas. We built them to create a universal standard for developers and applications to use whenever they need to prompt an end user to connect a dataset to their application. Our SDKs not only provide a standardized, universal way for any application to prompt users for data, they are built on the same security and privacy standards that services like Gmail and Plaid use.
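
Here is a hedged sketch of the consent flow an SDK like this standardizes: the application requests a scoped connection, the user is sent to a consent URL, and a state parameter guards the redirect. Every name, URL, and parameter below is hypothetical; this shows the shape of the flow, not our real API.

```python
# Hypothetical consent flow for connecting a user's data source.
# Domain, parameter names, and scopes are illustrative assumptions.
import secrets
from urllib.parse import urlencode

def build_consent_url(app_id: str, source: str, scopes: list[str]) -> tuple[str, str]:
    """Return (consent_url, state) for prompting an end user to connect a source."""
    state = secrets.token_urlsafe(16)  # CSRF protection, echoed back on redirect
    query = urlencode({
        "app_id": app_id,
        "source": source,            # e.g. "gmail" or "plaid"
        "scope": " ".join(scopes),   # least-privilege scopes, OAuth-style
        "state": state,
    })
    return f"https://connect.futureai.example/consent?{query}", state

url, state = build_consent_url("demo-app", "gmail", ["messages.read"])
print(url)
```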

Privacy and Security – Our platform today can cleanse any dataset using score-based text moderation and/or DLP to pull out sensitive, identifiable data. Depending on latency and privacy requirements, we can also apply an LLM-based cleanse to remove data at the prompt and deeper semantic levels. Which methods are used depends on the needs of the application.
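
As a concrete illustration of the DLP-style pass, here is a minimal pattern-based redactor. Production DLP adds confidence scoring, contextual detectors, and far more identifier types; the patterns below are deliberately simple assumptions.

```python
# Illustrative DLP-style cleanse: replace sensitive spans with typed
# placeholders before text reaches a model. Patterns are simplistic
# stand-ins for a real detector suite.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def cleanse(text: str) -> str:
    """Replace detected sensitive spans with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(cleanse("Reach me at jane@corp.com or 555-867-5309, SSN 123-45-6789."))
# -> Reach me at [EMAIL] or [PHONE], SSN [SSN].
```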

Playground – We have a self-serve playground for developers to come in and play with our SDKs, fine-tune their prompts before deploying them and see results before implementing our SDKs at the application level.

Our third product, and our most important launch of last year, is Polymath. We didn’t have Polymath in the cards at the beginning of the year, but as we went to market with models, SDKs, and our entire platform, there was demand, and a catalyst, for us to own the entire implementation for the customer. Palantir’s success in pioneering this model with Gotham, particularly within the government sector, gave us the courage to follow in their footsteps once we saw how effective it is for customers to have a proprietary AI platform, ontology, and implementation in one integrated solution.

We view this approach as a competitive weapon in our arsenal: it launches a customer into the FutureAI ecosystem quickly, while further deepening our tentacles into their business in a value-added and strategic way. The strategic value of Polymath extends beyond traditional implementation services. For customers, Polymath is an accelerant for any business looking to implement and build an AI application; for FutureAI, it accelerates a business’s ramp-up of platform revenue with us through increased velocity in their liftoff, which correlates directly with increased data ingestion and token generation.

Our analysis of customer ascension indicates a strong correlation between engagement of Polymath and accelerated platform revenue expansion on a per-customer basis. Additionally, we believe Polymath addresses a critical need in the market right now: the ability for commercial enterprises and government agencies to accelerate their implementation of AI into existing applications, or new solutions, without assembling extensive internal AI teams. Our hypothesis, supported by early customer demand, is that traditional enterprises, despite having substantial resources, will face significant challenges implementing AI independently for this reason alone. Polymath is their answer: it bridges this capability gap, accelerating our customers’ path to successful AI implementation while driving sustained platform revenue growth for FutureAI.

NOW: TIME FOR LIFTOFF

State-of-the-art frontier models are clearly showing their true worth as a commodity, and the economics of these models are undergoing a profound evolution, with development costs plummeting on the order of multi-hundred-fold levels. This compression alters the barriers to entry and the competitive advantage in developing models, shifting the value to the frontier, where we believe the value has always been (and where we have always been building). The market is realizing where the value is: a standardized, cohesive, and universal system that agnostically weaves together multiple AI models and multiple modalities of unstructured data, with the speed, security, and privacy to accelerate the integration of AI into the applications of enterprises and governments.

This cohesion, and a universal standard delivered to and for enterprises and governments, threading these sovereignties together, is FutureAI’s pressing market opportunity. We have been prescient in building for this unmet customer need by remaining agnostic across models and data ingestion modalities, while building for the speed, scale, and optionality that customers are demanding as they race to implement AI at relativistic speeds, while AI models, and the cost to develop them, change just as quickly. Not only is the implementation of AI in an enterprise or government now considered an arms race, it will soon become a component of warfare, because the intelligence and economic efficiency of a business or government will correlate directly with the speed at which it implements AI into its ontology. Why? Because access to a superintelligence enables you to do things you previously couldn’t do; AI does things outside the range of what a human is capable of. This is where all the value is captured.

This is why we’ve patiently waited to be like The Standard Oil Company while every LLM maker produces their own octane of gasoline, which we deliver into the engine of any application. For us, the value for enterprises and governments is in the distribution of LLMs to the integration point, and in the creation and assembly of the components needed to implement an AI application that powers their use case. On one side you have all of the makers of LLMs, each with their own octane of gasoline; in the middle you have FutureAI, the creator and assembler of the components that let your engine consume those different octanes; and on the other side you have the applications that are now natively powered by AI. That middle layer, a multi-model and multi-modal data ingestion architecture at the scale, speed, and security enterprise and government require, is FutureAI.

A SMALL WINDOW AND MOMENT IN TIME

At present, there is a small window and moment in time to become the universal AI platform and standard for enterprises and governments.

Two factors play into this, and we have an opportunity to capture both by executing during this unique window:

Set-Aside Contracts

  1. Currently, the closest company to FutureAI, and the only one even thinking on the same wavelength, is Palantir.
  2. The U.S. Federal Government has a mandate to award 23% of prime contract dollars to small businesses, with these contracts “set aside” exclusively for a class of business based on size.
  3. For our specific NAICS category, which is focused on Computer Systems Design Services, the size standard for qualifying for these set-aside contracts is less than $30 million in average annual receipts over a three-year period.
  4. This means the qualifying company’s annual receipts, averaged across three consecutive years, need to be under $30m. The cap is not $30m in each individual year; it is the three-year average that counts (see the sketch after this list).
  5. This gives us a nice glide path to capture a large number of high value set aside contracts likely over the next three to five years.
  6. This small window and moment in time is our foot in the door, and a race to acquire government contracts that Palantir would normally eat up but is precluded from winning, not because of any inability to execute them, but simply because of its size.
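
A quick illustration of that size test, with hypothetical figures:

```python
# Set-aside size test: the three-year AVERAGE of annual receipts must
# stay under the $30M standard, not each individual year. All figures
# here are hypothetical.
receipts = [12_000_000, 28_000_000, 45_000_000]  # three consecutive years

average = sum(receipts) / len(receipts)
print(f"average annual receipts: ${average:,.0f}")  # $28,333,333

# One year above $30M does not disqualify the company, as long as the
# three-year average stays below the size standard.
print("qualifies:", average < 30_000_000)  # True
```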

AI SPENDING BY AGENCIES OF THE UNITED STATES FEDERAL GOVERNMENT

[Chart: $6.09 billion in AI prime contracts awarded by 10 agencies, 2019-2024. Source: Artificial Intelligence: US Spending]

[Chart: AI prime contract awards by month, 2019-2024, awarded dollar amount by top 10 agencies. Source: Artificial Intelligence: US Spending]

[Chart: $3.4 billion in AI spending in 2025 by 5 U.S. agencies. Source: Artificial Intelligence: US Spending]

EXECUTIVE ORDER (EO) 14141 WILL ONLY ACCELERATE THIS SPENDING

On January 14, 2025, the White House issued Executive Order (EO) 14141, which mandates:

“(a) Within 180 days of this order… develop and submit to the President an action plan to achieve the policy set forth in Section 2 of this order to… advance United States national security and leadership in AI. Meeting this goal will require steps by the United States Federal Government, in collaboration with the private sector, to advance AI development and use AI for future national-security missions, including through the work described in National Security Memorandum 25 of October 24, 2024 (Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives).”

With the new Trump administration, and with this executive order in place, whose primary purpose is to advance United States national security and leadership in AI, we believe there will be a surge in government spending on AI over the next four years, and we are positioning ourselves to become the leader in this space.

To do so, we have set aggressive but disciplined contract-based goals: growing revenue by winning contracts, and accelerating growth in line with signed customers and committed revenue dollars.

FIRST TO MARKET

While I don’t usually believe in a first-to-market competitive advantage in most industries, the first-to-market premise is paramount here. As commercial enterprises and government agencies start to bet on an AI platform, other entities will follow and bet on the same platform because it’s vetted. So we are prepared, and ready to capitalize the business to seize this opportunity.

WHAT TO LOOK FORWARD TO IN 2025

For the first time ever, the government sector is moving faster on innovation than the private sector. This is because the government understands something better than any other type of entity in the world: weapons. It’s my hypothesis that the private sector still hasn’t grasped the core meaning of AI. This view comes from a number of conversations last year with some of the largest private-sector companies.

Maybe it’s because AI started out as a chatbot. More likely it’s this reason, though most won’t admit it: AI can fundamentally improve your margins while driving a better customer experience. For the first time since the dawn of the internet, revenues can increase while costs decrease. Ask any CTO of a private-sector internet company, in a private conversation, what they truly believe AI can do to their business. If they’re being honest, most will tell you it can cut their merchandising, content creation, fraud, engineering, design, and customer support teams in half, every single team in the company, and eventually replace them entirely.

What they won’t say publicly is that they don’t want to turn their peers in the C-suite into a firing squad, and their CEO doesn’t want to run the first company to reduce headcount, as it would signal weakness. Why do I believe this? Because even Bill Koch’s Oxbow Carbon is using AI, and rightfully touts: “Oxbow is the first calciner in the world to use true AI to optimize kiln performance and improve product homogeneity. We have harnessed AI to allow the kilns to autonomously adjust settings to create the exact product our customer requires. By the end of 2024, all three of our domestic calciners will utilize AI.” When a company like Oxbow, run by Bill Koch, one of the most old-school industrialists, is using AI, you know it’s for one reason and one reason only: they can produce their product better, faster, and cheaper with it.

This is why we turned our focus entirely to government contracts at the end of the fourth quarter of 2024. It is the best ideal customer profile you could wish for:

  1. You don’t have to convince them to adopt your technology; they are already seeking it out and buying.
  2. They have a large budget for it, in fact the largest budget in history for buying any product ($75.13 billion).
  3. They aren’t going anywhere as a customer, and contracts with them will grow over time.

So, if you want to know our strategy going forward: under the hood we’re not changing a single thing about how we’ve positioned ourselves for the long term. We will continue building the same infrastructure we’ve been building stealthily, to put superintelligence in the hands of every single person, small business, commercial enterprise, school, and government in the world to accelerate their own mission.

In fact, we have more conviction than ever; it’s only now that we can see the picture more clearly, and we are doubling down on our strategy. What gave us this conviction? It was when we started to think about AI as a resource like oil. When the early drillers thought all of the value was in drilling for oil, John D. Rockefeller had a much bigger vision for Standard Oil than refining oil: he understood that the real power wasn’t in drilling (where many others were focused, just as today most are focused on building frontier LLMs, “drilling for oil”), but in controlling the entire infrastructure system.

As everyone else was drilling, Rockefeller instead focused Standard Oil on creating the first national distribution network, and the modern gas station concept, to make oil useful and accessible. We look at data pipelines just like oil pipelines: they deliver data to AI models that others create and that we make accessible and distributable through application-facing components. Our SDKs are the “AI gas stations” of today, and we are creating the “standard” for how applications utilize AI. This is how we, and you, should think about FutureAI: the Standard Oil of AI.


Don't blame the marketing department.

The buck stops with the chief executive.