| CFA Level 1 Chapter | Total Questions | Correct (gpt-4o) | Correct (sonnet) | Correct (o1) |
| --- | --- | --- | --- | --- |
| Ethical and Professional Standards | 126 | 74.60% | 73.81% | 82.54% |
| Quantitative Methods | 91 | 75.82% | 75.82% | 94.51% |
| Economics | 84 | 88.10% | 90.48% | 94.05% |
| Financial Reporting and Analysis | 160 | 78.13% | 78.75% | 91.88% |
| Corporate Finance | 67 | 79.10% | 71.64% | 89.55% |
| Equity Investments | 85 | 74.12% | 69.41% | 94.12% |
| Derivatives | 43 | 74.42% | 83.72% | 95.35% |
| Fixed Income | 89 | 73.03% | 71.91% | 88.76% |
| Alternative Investments | 37 | 83.78% | 81.08% | 97.30% |
| Portfolio Management | 57 | 77.19% | 75.44% | 85.96% |
| Total | 839 | 77.47% | 76.76% | 90.70% |
I used OpenAI’s o1 reasoning LLM to review the EU AI Act.
It took o1 only 18 seconds to think. It took me 2 minutes to convert the PDFs to nicely formatted text.
My prompt is “Give me the top 3 logical contradictions in the EU AI Act I've pasted here: TEXT”
Here is the o1 response:
1. A "risk-based" approach that isn’t always risk-based:
The proposal emphasizes a proportionate, risk-based regulatory model, claiming that requirements scale with the potential harm of an AI system. However, the classification of high-risk AI systems is often triggered by the sector or intended use listed in Annex III—regardless of actual, demonstrated risk levels. This means certain AI applications are subject to stringent obligations purely because they fall into a predefined category, contradicting the idea of truly tailoring requirements to the real and context-specific risks posed by the technology.
2. Encouraging innovation while imposing costly compliance burdens:
The Act states it aims to foster innovation and support small and medium-sized enterprises by providing regulatory sandboxes and other guidance. At the same time, it imposes stringent conformity assessments, continuous monitoring, and record-keeping obligations for high-risk AI systems. These demands can become a heavy financial and administrative load, especially for smaller providers. This tension between claiming to nurture innovation and simultaneously mandating complex, resource-intensive compliance contradicts the stated objective of reducing barriers and encouraging market entrants.
3. Harmonizing the single market but layering multiple standards and bodies:
The proposal seeks to create a uniform framework to prevent market fragmentation, ensuring that AI systems can freely circulate throughout the EU. Yet, it references and depends on numerous existing and future sectoral regulations, the involvement of multiple notified bodies, and potential reliance on a patchwork of technical standards and common specifications. As a result, developers and users may navigate a complex web of overlapping rules and authorities. This complexity undercuts the promise of a clear and harmonized single market for AI, creating, in practice, a contradictory situation where the pursuit of uniformity may instead lead to confusion and uneven application.
I’d love to run this prompt in o1 on all government documents.
One of the best things I did this year was to take a 10-week stand-up comedy course taught by comedian Logan Murray.
It culminated in a live performance before 100 people at the Museum of Comedy, which is located next to the British Museum in London.
When I invited close friends and colleagues to the show, most were shocked. “That is so scary. Why would you do that?” they wondered.
I didn’t do it for the reasons many do. I wasn’t trying to improve my public speaking skills or reduce stage fright. I certainly wasn’t preparing for a side hustle or career transition.
It was more philosophical. Humour is so fundamental to human nature. I wanted to understand what makes a joke funny and why people laugh. How is humour related to curiosity and inquisitiveness?
I knew my friends would understand once they saw the show.
They are all curious, intelligent people. What I love most about them is that they share a sense of humour. They don’t take themselves too seriously. Over the years, we’ve bonded over laughter far more often than over food, drinks or holidays.
There is something about “sharing a laugh” that’s both primitive and profound.
I took the stand-up comedy course to learn about this eternal form of human bonding.
Over 10 weeks, I studied with a group of total strangers. There were three semi-pro comedians, one ex-talk show host, a former concert singer, one private equity analyst, an entrepreneur, a former professor, a bodyguard and me.
We were taught key comedic concepts of status, spontaneity and storytelling. We critiqued each other’s performances. Everyone had a distinct style. We rarely talked about what we did in “real life.”
It felt strange to know so much about each other and yet nothing at all.
But it was a wonderful bonding experience and a beautiful memory.
Like the memory of visiting Greenland with total strangers in 2016.
Mathematical analysis was my nemesis.
I had nightmares of “infinitesimal quantities”.
One of the biggest shocks of my life 25 years ago set in motion the chain of events that led me to start a financial AI company, SigTech.
It starts with my roots growing up in Shanghai, China’s largest city with 25 million inhabitants.
It’s a vibrant place filled with entrepreneurial energy. In Beijing, when people go out, they gossip about politics. In Shanghai, people talk about money.
When you grow up in China, you get used to the idea that whatever you do, a million other people are trying to do the same thing. You are constantly under pressure to compete because it’s drilled into your brain that no one's going to give you anything. You have to earn it.
You have to be an excellent student and go to excellent schools to get an excellent job. That’s the way it works. So, I worked hard to get good grades and won national Olympiads in math and physics. Being a top student was my identity.
When I was given the chance to choose any major in any university in China, I chose to join a class of 20 students of “special talent” assembled at Shanghai Jiaotong University, one of the world’s top universities.
I quickly realized I was not going to be the best. In the mathematical analysis class I took, I was ranked in the bottom half.
It was shocking!
Even worse, no matter how much effort I put in, I simply lacked intuition for “epsilon”.
At the time, it seemed inconceivable. It challenged my self-identity. If I can’t be the best in math, then who am I? What will I do with my life?
It was a critical period, and it prompted me to start thinking more about what I wanted instead of what I was supposed to want. That’s not a common thing in China.
The reflection prompted me to take a different path than I had planned, eventually applying to Cambridge University for a doctorate, and working in finance in London afterwards.
Those lessons helped me, as a computer scientist, transition into broader roles, first at Barclays Capital and then at Brevan Howard.
I developed a sense of independence, fostered by the variety of jobs. When people told me to do something, I was less likely to blindly say yes. Instead, I asked more questions.
Looking back, the thing that probably contributed most to my professional development was being forced to sit in that room of 20 other students at university and acknowledge I wasn’t ever going to be at the top of the class.
Sometimes it takes realizing you aren’t as good at the game as you thought to change your mind about the game you want to play.
Like a hunting cheetah in the Kruger National Park, South Africa, 2018
Everyone wants a customer to say YES.
I want a customer to say NO.
A salesperson wants affirmation from a customer. He goes into a meeting to pitch: look at our amazing product with amazing features; how it can help your use cases; how it can change your life.
The customer smiles: “Very interesting! Definitely something I’d consider. Keep me updated.”
The salesperson is pleased, thinking that the customer loves it and he’ll close the deal soon.
But it will never happen.
The reason it never happens is that it’s not a real YES. It’s a fake YES the customer gives to escape.
We are taught to be polite. It’s considered bad manners to bluntly reject someone who’s eager to please, even a salesperson. We learnt not to burn bridges, just in case one day we change our minds.
So we reject pitches by giving people seemingly positive feedback and approval. Not too much, just enough to end the conversation.
We kick the can down the road. We want to soften the blow.
When I meet a customer, I look for a NO.
I notice that people often know what they don’t want far better than what they want. They articulate NO much more clearly than YES. A NO is always concrete.
I met a client recently with my salesperson. His pitch was energetic and convincing, covering all the bases from data quality, different asset classes to templates for option strategies.
The client was nodding and smiling. He said he would love to try it if he had more time.
I could see that this was not a real YES. I wanted to get from him a concrete NO.
I said: “Even with all these tools, it’s not easy to research option strategies, is it?”
His eyes lit up: “It’s so hard. We have to focus on ideas that make money. I don’t know the option space that well.”
I got it. I admired his courage to admit that the bottleneck wasn’t necessarily the product, but his own expertise.
I offered a suggestion: “We recently launched an AI agent that can read the description of a trade idea in English and automatically implement it in Python. Will this help you test more ideas?”
His body language softened and he asked: “How do you price it?”
In my experience, the only reliable sign of a real YES is when a client asks about the price.
People love saying NO because it makes them feel safe and grounded.
Often we just need to encourage them to voice NO in order to dig deeper into specific issues and topics.
That may be the only way to get to a real YES.
* A powerful black rhiNO in the Kruger National Park, South Africa, 2018
Clients often ask me why we named our company SigTech.
I explain that the first part of our name has two origins: “Sig” is short for signals. We are in the business of providing technology so investors can find signals in the market.
It is also a nod to our history: “SIG” was the acronym for Systematic Investment Group, the team I led when I worked at the hedge fund Brevan Howard.
When “Sig” is combined with “Tech” it explains our mission: to provide leading technology to help investors distinguish “signals” from the “noise”.
The idea of the business was first conceived while I was hiking Inca trails to reach Machu Picchu in late 2017.
In 2019, we officially started out offering services to buy-side portfolio managers at hedge funds, pension funds, and sovereign wealth funds. We are now expanding that effort by leveraging generative AI to make those advanced decision-making tools available to anyone in capital markets.
I’m passionate about democratizing access to data and information in part because I came to the world of finance unexpectedly.
After completing my undergraduate degree in electrical and electronics engineering at China’s Shanghai Jiaotong University in 2003, I began doctoral studies in computer science at the University of Cambridge.
That gave me the opportunity to help research and develop Xen, the virtual machine monitor that ushered in the era of cloud computing when it enabled Amazon to build AWS.
In 2007, after my PhD studies, I went to work at Barclays Capital in quantitative investment strategy trading, and in 2015 I moved to Brevan Howard, where I later became the chief investment officer running the Systematic Investment Group.
Brevan Howard puts an emphasis on intellectual honesty. The culture encourages employees to analyze trades to determine how much of our success was due to luck versus skill. In the past two decades, the firm has generated over $30 billion in returns for investors.
In February 2019, Brevan Howard spun off SigTech as a fintech business supporting other firms making data-driven investments.
This process of distilling data into signals is challenging. You must overcome a high degree of background noise in the data. But this distillation is core to the investment process.
Once you find a signal, you need to determine if it will accurately predict an outcome significantly better than the market. You also have to decide if you can structure trades to monetize your prediction so that your upside is greater than the downside.
Solving those problems is where generative AI comes in, and it’s a game-changer.
At SigTech we expect the technology we are building to change how decisions are made in capital markets.
I sat down with Eddie Donmez, the founder and CEO of Creative Capital, to talk about my career and AI at SigTech:
In the latest Creative Capital interview, Eddie Donmez sits down with SigTech founder and CEO, Bin Ren. Bin has a crazy story. Before founding SigTech, he worked on the code that became what we know today as Amazon Web Services (AWS) and worked for one of the most famous hedge funds in the world, Brevan Howard. After working closely with Alan Howard, he spun SigTech (Systematic Investment Group) out of Brevan to provide cutting-edge technology to hedge funds, sovereign wealth funds, and endowments. He has now developed a game-changing generative AI tool for all investors - SigTechAI. It offers access to premium financial data, performs accurate portfolio analysis, and provides endless tools for investing. No math. No code. In the interview, we discuss the implications of AI in finance and how it will prove to be a game changer for finance, investing, wealth management, and more. This is a MUST LISTEN for any investor or student of the game.
You can watch the whole interview here on YouTube.
I had one of my favourite conversations with Jay Singh, CEO and founder of Casper Studios, on his AI Founders Podcast:
In this episode, I spoke with Bin Ren, CEO and founder of SigTech, a fintech platform that combines AI and quantitative analysis for market professionals. Bin shared his journey from Shanghai to London, discussing how cultural experiences shaped his entrepreneurial mindset and the importance of balancing exploration and exploitation in creativity.
We also explored the transformative potential of generative AI in finance, particularly its ability to integrate numerical analysis with textual insights. Bin emphasized the need for intellectual honesty in assessing both successes and failures, and he articulated a vision for using advanced reasoning models to enhance trade structuring, aiming to revolutionize traders' strategies. This conversation highlighted the exciting intersection of technology and finance.
You can listen to it here, on Spotify or Apple Podcasts.
I had the pleasure of speaking with James Jockle, EVP and CMO of Numerix, on the Trading Tomorrow podcast:
Unlock the future of finance with Bin Ren, Founder & CEO of SigTech, as he reveals the transformative potential of AI in capital markets. Discover how AI is revolutionizing financial decision-making processes by enhancing productivity tools for professionals in investment management, trading, and risk management. Learn about the critical role of a robust data foundation in building AI-driven systems and the intricate stages of pre-training and post-training large language models. Bin shares practical examples to illustrate how AI can swiftly process and summarize complex information, potentially altering how financial decisions are made.
You can listen to it on Buzzsprout, Spotify or Apple Podcasts.
I had the pleasure of sitting down with Alex Miller from Citi’s Global Insights team and Ronit Ghose, Citi’s Head of Future Finance, to discuss the emerging role of artificial intelligence in global banking on the Good Things Happen podcast:
In the series finale of Good Things Happen, we joined Citi’s 11th annual Digital Money Symposium in London, which took place on 20th June, to be inspired by thought leaders across the financial services ecosystem discussing ‘the future of finance and money’.
Guest host Alex Miller, from Citi’s Global Insights team, is joined by Bin Ren, Founder & CEO of SigTech, and Ronit Ghose, Citi’s Head of Future Finance, to discuss the emerging role of artificial intelligence in global banking.
Bin Ren tells us about the autonomous AI agents his company is building, including those able to pass the Chartered Financial Analyst (CFA) exam at 90th-percentile performance. He describes the opportunities he sees to use AI to scale the industry, boost productivity and enhance jobs.
Ronit Ghose joins the podcast to talk through AI and the considerations and regulations being put in place to avoid and mitigate risks such as deepfakes and AI hallucinations.
You can listen to it here.
I had the pleasure of speaking with Corey Hoffstein, the co-founder and Chief Investment Officer of Newfound Research, on the Flirting with Models podcast:
In this episode I speak with Bin Ren, founder of SigTech, a financial technology platform providing quantitative researchers with access to a state-of-the-art analysis engine.
This conversation is really broken into two parts. In the first half, we discuss Bin’s views on designing and developing a state-of-the-art backtesting engine. This includes concepts around monolithic versus modular design, how tightly coupled the engine and data should be, and the blurred line between where a strategy definition ends and the backtest engine begins.
In the second half of the conversation we discuss the significant pivot SigTech has undergone this year to incorporate large language models into its process. Or, perhaps more accurately, allow large language models to be a client of its data and services. Here Bin shares his thoughts on both the technical ramifications of integrating with LLMs as well as his philosophical views as to how the role of a quant researcher will change over time as AI becomes more prevalent.
You can listen to it here, on Spotify or Apple Podcasts.
Let’s start by talking about “low-background steel”, a story that began in the shadow of World War II, a silent witness to a seismic shift in human history.
This unique form of steel was produced before the advent of nuclear weapons in the 1940s and 1950s. The detonation of the first nuclear bomb in July 1945 marked the beginning of widespread nuclear fallout, which introduced radionuclides like cobalt-60 into the atmosphere. These particles, carried by the air, contaminated materials across the globe, including steel produced in the post-nuclear era.
The significance of low-background steel arises from its scarcity and its critical role in scientific research, especially in fields where precise detection of radiation is paramount.
It became an irreplaceable resource for constructing sensitive instruments, such as Geiger counters and equipment for photonics research. Because of its production date, it lacks the radioactive contamination that post-war steel possesses, thus enabling more accurate readings in radiation detection and research.
The dwindling supply of low-background steel has led to intriguing stories, such as salvagers seeking out World War II-era shipwrecks, which are speculated to have been targeted for their low-background steel.
Now let’s talk about “low-background tokens”. These tokens are not cryptocurrency, but the input data used to train Large Language Models (LLMs). The introduction of OpenAI’s ChatGPT in November 2022 can be likened to a nuclear event for data: ever since, the digital atmosphere has burgeoned with AI-generated content.
The tokens from before this proliferation are untainted by the recursive feedback loop of AI creations—they are the low-background tokens of our time, a pristine dataset free from the echo of AI's own voice.
The significance? With the exponential growth of AI, the undisturbed datasets from before this 'detonation' of AI-generated content are a dwindling resource. Their value lies in their untouched provenance, offering a baseline for training future AIs with original, unaltered human input.
As we gradually lose access to these tokens, we face a potential future where AI's training is a reflection not just of original human thought, but of a recursive, self-referential digital creation. The impact could be profound, influencing how AI understands and interacts with the world, based increasingly on its own generated output rather than pure human output.
Our challenge is to recognize the worth of these low-background tokens and to strategize their use wisely. They are the bedrock of authenticity in a sea of synthesized information, enabling us to train AIs that genuinely comprehend human values: untainted and true.
A recent leadership shift at OpenAI raises pivotal questions about Artificial General Intelligence (AGI) and its governance. Sam Altman's departure as CEO follows diverging views within OpenAI's board on what AGI truly means and how to handle its potential arrival.
According to OpenAI, AGI is a "highly autonomous system that outperforms humans at most economically valuable work”. However, during a recent talk at the Cambridge Union in England, Altman set a higher bar for AGI, including the "discovery of new types of physics".
This definition matters. With OpenAI's partnership with Microsoft, the tech giant has access to AI models below the AGI threshold.
The broader Altman’s definition of AGI, the more advanced the AI models that fall into Microsoft’s ambit, as they’re classified as pre-AGI.
Internal developments suggest OpenAI might be closer to AGI than expected, leading to a strategic divide.
While Altman seemed inclined towards broader distribution, including Microsoft’s utilization, others, like Ilya Sutskever, who was responsible for AGI alignment, favored a more cautious approach.
This clash of visions — between unleashing potential AGI advancements and ensuring rigorous safety and alignment — might have catalyzed Altman's exit.
The OpenAI board, committed to "safe AGI that is broadly beneficial”, had to make a tough call.
Stay tuned as the event continues to unfold.
Welcome back to our series on scaling Large Language Models (LLMs)! Following our discussion on how compute trumps human heuristics, let's delve into Reinforcement Learning (RL), where this principle is vividly exemplified. RL is a method of training AI by rewarding desired behaviors and learning from interaction.
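To make "rewarding desired behaviors and learning from interaction" concrete, here is a minimal sketch of tabular Q-learning, one of the simplest classic RL algorithms, on a toy corridor environment. The environment, states and reward values are all invented purely for illustration:

```python
import random

# Toy corridor: states 0..4, agent starts at 0; reaching state 4 pays reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Core update: nudge Q towards observed reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy walks straight to the goal: [1, 1, 1, 1]
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

The systems below replace this tiny lookup table with deep neural networks and the random exploration with guided search, but the reward-driven learning loop is the same.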
1. Success of AlphaGo:
AlphaGo's triumph was primarily due to combining deep neural networks with Monte Carlo tree search (MCTS). It learned from human game records and then through self-play, leading to superhuman Go strategies.
In March 2016, AlphaGo beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap.
2. From AlphaGo to AlphaZero:
AlphaZero represented a major leap, learning entirely through self-play without prior human game data. This approach was applied not only to Go but also to chess and shogi (Japanese chess), showcasing a shift from specialized to generalized AI learning systems.
3. From AlphaZero to MuZero:
MuZero extended AlphaZero's capabilities to games with unknown dynamics, learning from environmental interactions without needing game rules. It combined a world model with deep learning and MCTS, advancing towards AI that understands and interacts with complex environments.
4. RL Plays a Central Role in LLM Alignment:
RL is crucial in LLM alignment, particularly in Reinforcement Learning from Human Feedback (RLHF), where LLMs are refined using human feedback so that their outputs align with ethical guidelines.
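At the heart of RLHF sits a reward model trained on pairs of responses that humans ranked. Here is a minimal sketch of the standard pairwise (Bradley-Terry) training loss, with random tensors standing in for real response embeddings and a linear layer standing in for the LLM-backed reward model:

```python
import torch
import torch.nn as nn

EMB_DIM = 16
reward_model = nn.Linear(EMB_DIM, 1)  # stand-in for an LLM-backed scorer
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Fake batch: embeddings of responses humans preferred vs. rejected.
chosen = torch.randn(32, EMB_DIM)
rejected = torch.randn(32, EMB_DIM)

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Pairwise loss: push the preferred response's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained scores then become the reward signal that RL uses to refine the LLM.
```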
Looking ahead, RL could significantly enhance LLM training and inference, a topic we'll explore in our next post. RL might guide LLMs towards more accurate, context-aware, and ethically aligned outputs, heralding a new era in AI innovation.
Welcome to our exploration of Q*, a groundbreaking development in AI, particularly in scaling Large Language Models (LLMs).
In short, Q* makes LLMs PONDER.
1. What Q* Represents:
Q* stands for a new technique that leverages reinforcement learning (RL), akin to MuZero-style RL by Google DeepMind, to enhance LLMs' capabilities. It's an approach where the model learns optimal behaviors through a process similar to trial and error.
2. How Q* Works:
Q* adopts MuZero-style RL, using Monte Carlo Tree Search (MCTS) combined with a reward-function learning mechanism. This allows Q* to simulate a vast array of potential actions (predictions of tokens) and learn from the best outcomes, greatly improving its multi-step decision-making.
For example, an LLM can predict each token 100 times and then learn which sequence of tokens ultimately leads to the best outcome (the highest reward).
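A minimal sketch of this "sample many times, keep the best" idea (often called best-of-N sampling), with a hypothetical `generate` sampler and `reward` scorer standing in for the LLM and the learned reward function; both are invented placeholders, not real APIs:

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for sampling one completion from an LLM."""
    return prompt + " " + random.choice(["answer A", "answer B", "answer C"])

def reward(completion: str) -> float:
    """Hypothetical stand-in for a learned reward / verifier model."""
    return random.random()  # toy scoring rule

def best_of_n(prompt: str, n: int = 100) -> str:
    # Sample many candidate completions and keep the highest-reward one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)

print(best_of_n("Solve: 12 * 7 ="))
```

The same loop generalizes from scoring whole completions to branching on every token, which is where the MCTS analogy becomes exact.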
3. Mathematics as a String Manipulation Game:
In the realm of formal mathematics, systems like ZFC are built on "starting strings" (axioms), "string generators" (axiom schemas), and "string manipulation rules" (rules of inference).
The goal is to use these rules on starting strings or strings generated from templates to produce a target string, known as a theorem.
This framework aligns well with how Q* approaches mathematical problem-solving, treating math as a game of string manipulations.
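As a toy illustration of mathematics as a string manipulation game (vastly simpler than ZFC), here is Hofstadter's classic MIU system: one axiom string, four rewrite rules, and a breadth-first search for a target "theorem". An RL-guided prover would replace the blind search below with a learned policy over which rule to fire next:

```python
from collections import deque

def successors(s: str):
    """Apply the four MIU rewrite rules everywhere they match."""
    if s.endswith("I"):             # Rule 1: xI  -> xIU
        yield s + "U"
    if s.startswith("M"):           # Rule 2: Mx  -> Mxx
        yield s + s[1:]
    for i in range(len(s) - 2):     # Rule 3: III -> U
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]
    for i in range(len(s) - 1):     # Rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]

def prove(axiom: str, theorem: str, max_len: int = 12):
    """Breadth-first search from the axiom to the theorem via the rules."""
    queue, seen = deque([[axiom]]), {axiom}
    while queue:
        path = queue.popleft()
        if path[-1] == theorem:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen and len(nxt) <= max_len:
                seen.add(nxt)
                queue.append(path + [nxt])

print(prove("MI", "MUIU"))  # e.g. ['MI', 'MII', 'MIIII', 'MIIIIU', 'MUIU']
```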
4. Q* and Mathematical Problem Solving:
Q* generates a 'tree of thoughts' by predicting the next tokens multiple times, akin to exploring different branches in MCTS. Each branch represents a potential solution path, and the model learns to navigate this tree effectively using feedback on the correctness of each solution.
5. Trading Off Inference Time for Quality:
Q* trades increased inference time for enhanced quality of outcomes. By spending more time analyzing each decision (e.g. predicting the next token 100x), Q* achieves a level of inference quality that rivals much larger models.
6. Small Model, Big Impact:
This strategy enables a smaller model to deliver the performance of a significantly larger one. It's an efficient way to scale up the capabilities of LLMs without proportionally increasing their model size, data size and compute.
7. Overcoming Data Scarcity with Synthetic Data:
Intriguingly, Q*'s approach of learning from its best predictions is akin to training with self-generated synthetic data. This method effectively addresses one of the major challenges in scaling LLMs: data scarcity.
By generating and learning from its own predictions, Q* paves the way for more efficient and scalable AI models, marking a new era in AI research and applications.
All LLMs today need Retrieval Augmented Generation (RAG). Why? To access private knowledge and to work around the small context window.
GPT-4 is a state-of-the-art reasoning engine trained on a vast public corpus of human knowledge across numerous domains.
But it has NO access to our private knowledge in the form of proprietary documents, internal chat history, video conference transcripts, customer information, commercial contracts, etc. It’s also RARELY possible to inject an entire body of private knowledge into the context window, which is limited to 128k tokens.
For a given query, RAG works by doing a semantic search over the corpus of private knowledge and retrieving only the relevant parts as the context for an LLM.
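A minimal sketch of that retrieve-then-generate loop. The hashed bag-of-words `embed` below is a crude stand-in for a real embedding model, and the corpus snippets are invented examples:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Crude stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Private corpus: proprietary documents chunked into passages (invented).
corpus = [
    "Q3 board minutes: approved expansion of the rates desk.",
    "Client contract: SLA requires 99.9% uptime for the data API.",
    "Research note: EURUSD carry strategy underperformed in 2023.",
]
corpus_vecs = np.stack([embed(c) for c in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity reduces to a dot product on unit-normalized vectors.
    scores = corpus_vecs @ embed(query)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

query = "How did the carry strategy perform?"
context = "\n".join(retrieve(query))
# Only the retrieved passages, not the whole corpus, go into the LLM's prompt.
print(f"Context:\n{context}\n\nQuestion: {query}")
```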
However, the newly announced RAG in GPT-4 Turbo doesn’t work out of the box for most applications in finance because:
1) One has to manually upload up to 20 files, with each file limited to 512MB.
2) Only a few file types are supported (e.g. pdf, csv, txt).
3) There is no connectivity to any data source (e.g. database, data lakes, document stores).
4) The cost of storing these files is very steep: $0.20/GB/agent/day, which is about 260x AWS S3 standard pricing of $0.023/GB/month ($0.20 × 30 days ≈ $6/GB/month versus $0.023/GB/month).
Also, LLMs in general (including with RAG) are fundamentally terrible at time series analysis.
At SigTech we combine the state-of-the-art tools and data sets in finance with LLMs to maximise the productivity of our human users.
Welcome back to our series on scaling Large Language Models (LLMs)! Following our exploration of multi-modality's impact on enhancing LLM learning, let's dive into another pivotal aspect: the role of compute.
- The Dominance of Generic Training Methods:
Picture a scenario where you're trying to optimize a complex system. Initially, you might apply specialized rules based on human understanding, akin to using a detailed map for navigation.
However, as computational power grows, a more effective approach emerges: generic training methods that leverage this compute power. It's like switching from using a map to a GPS that continuously learns and updates the best routes.
In AI, this principle holds true: Generic training methods with more compute always trump human-crafted heuristics.
- Embracing Complexity in AI Development:
A key insight from our exploration of compute is recognizing the immense complexity of human cognition. Unlike simple models of space, objects, or agents, human thought processes are deeply intricate.
In scaling LLMs, we aim not to encode these complexities directly but to develop meta-methods that enable AI to discover and navigate this complexity on its own. It’s about equipping AI to find patterns and approximations in data as humans do, rather than pre-loading it with our existing knowledge.
This approach allows AI to evolve and adapt in ways that mimic human discovery and learning.
- Real-World Example of AlphaGo to AlphaZero by Google DeepMind:
Initially, AlphaGo learned from human players, much like a student learning from a textbook. But AlphaZero changed the game. It learned by playing against itself, akin to a student who learns not from books, but by experimenting and discovering new knowledge independently.
This shift from human-guided learning to self-exploration and self-improvement showcases the power of computation in AI development.
- The Future of AI:
Envision a world where AI can not only learn from what it's been taught but can also innovate and discover new ideas, much like an artist who evolves from imitating others to creating their unique style.
This future of AI, where originality and creativity flourish, is powered by the relentless growth of computational capabilities.
- Predicting the next token is understanding:
Large Language Models (LLMs) like GPT-4 don't just learn languages; they learn about the world. By predicting the next token in a vast array of texts, these models gradually build a 'world model'.
This means they're not only understanding language structure but also grasping the complex web of human knowledge, behavior, and societal norms. Essentially, they learn how the world works, one word at a time.
Now, imagine you're deeply immersed in a detective novel, rich with clues, complex characters, and twisted plots. The story builds to a climax where the detective declares, "Now I'm going to reveal the name of the murderer, and it is ___"
If an AI model can correctly predict the next word that fills this blank, it's showing an understanding that extends far beyond language. To accurately complete this sentence, the AI must understand the entire novel - every plot twist, character arc, and subtle hint.
This analogy vividly demonstrates how predicting the next word in a sequence requires and reflects a profound understanding of the context.
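A toy sketch of the objective itself: a bigram "model" that predicts the next token purely from counted frequencies in its training text. Real LLMs replace the count table with a deep neural network, but the prediction target is identical. The sample text is invented:

```python
from collections import Counter, defaultdict

text = ("the detective examined the room . the butler held the candlestick . "
        "the detective accused the butler .")
tokens = text.split()

# 'Train': count which token follows which (the simplest next-token predictor).
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))        # 'detective' (its most frequent follower)
print(predict_next("detective"))  # 'examined' (first of a tied count)
```

Even this trivial model encodes a sliver of world knowledge from its corpus; scaling the same objective to trillions of tokens is what lets an LLM fill in the murderer’s name.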
- Enhancing AI's Perception with Visual Data:
Over 50% of our brain's cortex is dedicated to visual processing, highlighting the importance of visual information in human understanding. Similarly, when AI incorporates visual data, it undergoes a transformative shift in comprehension.
A compelling example is how LLMs, despite not having 'seen' a single photon, gradually develop an understanding of colors. This information is indirectly 'leaked' into the AI's learning through the vast textual data it processes.
This process mirrors how human understanding of concepts like color can be shaped through descriptions, even without direct visual experience.
But by integrating visual data, AI's learning can be dramatically accelerated, much like adding a powerful new sense to its existing capabilities.
- Videos: The Next Frontier:
Incorporating videos into AI's learning process adds the dimension of time and motion. Videos help AI understand how objects and entities interact and change over time, following the physical laws of our universe.
It's the difference between a static picture of a bird and a video showing the fluid motion of its flight. By learning from videos, AI not only recognizes but understands the dynamics of the world around us, completing its transition from static observer to dynamic participant.
In this way, AI's progression from text to imagery, and eventually to videos, reflects a deepening and broadening of its understanding, paralleling the multi-faceted way humans perceive and interact with the world around us.
I had the pleasure of speaking with Hiten Patel, partner and global head of financial infrastructure, technology, and services at Oliver Wyman, on the Innovators' Exchange podcast:
In the latest episode of Innovators' Exchange, Hiten Patel speaks with Bin Ren, the CEO and co-founder of SigTech, a leading provider of quant technologies. The discussion centers around democratizing access to quant trading strategies, the surge of retail investing, and the profound implications of generative AI (Gen AI) in the financial markets.
Bin unveils the story behind SigTech's mission to accelerate the idea-to-market process in capital markets. Their focus on reducing the lifecycle of ideas from months to seconds empowers traders and portfolio managers, democratizing the landscape for both professionals and retail traders. The company's cutting-edge technology, featuring Gen AI, enables quick generation, testing, and deployment of trading ideas, contributing to a more efficient and accessible market.
You can listen to it here, or on Spotify and Apple Podcasts.