The Lindahl Letter
Thoughts about technology (AI/ML) in newsletter form every Friday
nelslindahl.substack.com
Last Episode Date: 18 January 2025
Total Episodes: 116
Thank you for being a part of the journey. This is week 177 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “Your valuable attention: Why Your Focus Is Under Siege.”

In a world where your attention is more valuable than ever, every scroll, click, and swipe is part of an invisible economy. This “attention economy” drives social media platforms, streaming services, and even productivity tools. It’s not your time they want—it’s your focus. The cost of lost attention is both personal and societal. On an individual level, fragmented focus lowers productivity, weakens relationships, and diminishes a sense of purpose. On a societal scale, the effects ripple outward, creating polarization, misinformation, and a culture that values busyness over depth. Occupied time is not always productive. We have to move to strengthen the fabric of civil society. It’s our general civility that has become unsettled.

The statistics are startling. The average person now spends over seven hours daily consuming digital media. We spend that focus driving through a constantly updating sea of digital content. Notifications, pop-ups, and infinite scrolls have rewired our brains and expectations to crave constant stimulation, sadly leaving little room for deep thought or creativity. The attention span of the modern human is estimated at just 8.25 seconds—shorter than that of a goldfish [1]. This isn’t an accident; it’s by design. Technology companies have mastered the art of capturing your focus. Every feature on your favorite app, from autoplay videos to personalized algorithms, is crafted to keep you engaged for as long as possible. The longer you stay, the more data they collect and the more ads they show. Attention has become the currency of the 21st century, and you’re the commodity. People have been saying that attention is the new oil for about 7 years [2][3].

Your attention is the gateway to everything you value—learning, relationships, civility, and achieving your goals. Without the ability to focus, time slips away unnoticed. Productivity declines, creativity dwindles, and even happiness suffers. The constant pull of distractions chips away at your ability to live intentionally. Yet, understanding the problem is the first step to regaining control. When you recognize that your attention is being diverted, you can begin to take deliberate steps to reclaim it.

The attention economy thrives on a simple premise: the longer you stay engaged, the more valuable you are. Algorithms study your habits, preferences, and vulnerabilities, ensuring that the content you see is optimized to keep you scrolling. But the effects go beyond wasted time. In the workplace, frequent interruptions reduce productivity and lead to decision fatigue, costing billions in lost output annually. In personal relationships, divided attention weakens connections, leaving friends, partners, and colleagues feeling undervalued. On a mental health level, the endless cycle of notifications and comparisons fosters anxiety, burnout, and a distorted sense of self-worth. It feels good to feel busy, but that does not translate into actual outcomes.

The good news is that you can fight back. Reclaiming your attention starts with awareness. Recognize when and where your focus is being pulled, then take actionable steps to protect it. Turn off non-essential notifications; your phone doesn’t need to buzz for every like, comment, or update.
Set digital boundaries using tools like screen time trackers or app blockers to create intentional limits. Schedule time for focused, uninterrupted work on meaningful tasks. Most importantly, reconnect with presence during conversations and relationships. Put away your devices and engage fully.

Your attention isn’t infinite, but it is powerful. By reclaiming control, you can transform your relationship with technology, your work, and the people in your life. The battle for your attention isn’t just a personal challenge—it’s a societal one. As individuals, we must learn to resist the pull of distractions. As a society, we must demand ethical technology that respects our focus rather than exploits it. Your focus is your greatest asset. Don’t let it be stolen. One of the big changes that I made was shifting to a fitness ring instead of allowing alerts on my wrist from a watch. For me those wrist alerts shattered my efforts to achieve deep work and sustain focus. Sometimes you just need to focus and those alerts, notifications, or messages just need to wait a little bit in the attention priority queue.

Footnotes:
[1] I’m not entirely sure this citation is the best source for this metric, but it does seem to be commonly cited and comes from a 2015 Time magazine piece: https://time.com/3858309/attention-spans-goldfish/
[2] https://www.google.com/search?q=%22attention+is+the+new+oil%22
[3] https://medium.com/@setsutao/attention-is-the-new-oil-not-data-bf54c64d3279

What’s next for The Lindahl Letter?
* Week 178: Inside the Mind: The Science of Focus and Distraction
* Week 179: Designed to Distract: How Technology Grabs Your Attention
* Week 180: The Focus Formula: Prioritize What Truly Matters
* Week 181: Your Attention Fortress: Building a Distraction-Free Life
* Week 182: Deep Work, Rare Results: The Art of Uninterrupted Focus

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Quantum computing continues to captivate the imagination of scientists, technologists, and futurists alike, offering the promise of solving problems intractable for classical machines. Amidst the steady stream of breakthroughs, one concept has emerged with both scientific intrigue and practical potential: time crystals. These exotic states of matter, once considered the stuff of theoretical musings, are now taking shape in laboratories and, intriguingly, hold promise for quantum computing applications.

At their core, time crystals are a new phase of matter, one that breaks time-translation symmetry. In classical physics, symmetry breaking usually refers to spatial phenomena—such as ice forming from water, where the uniformity of liquid water transitions to the structured lattice of solid ice. Time crystals, however, add a temporal twist: they exhibit periodic motion that persists indefinitely without absorbing net energy, defying classical expectations. Proposed in 2012 by Nobel laureate Frank Wilczek as a theoretical construct and experimentally realized in 2016, time crystals are not perpetual motion machines but rather quantum systems that oscillate in a stable, repeating pattern under the influence of an external driver. A toy illustration of that periodic signature appears near the end of this piece.

For quantum computing, time crystals offer a tantalizing prospect. They provide a platform where quantum states can be maintained with high coherence—essential for reliable quantum computation. Time crystals are inherently non-equilibrium systems, making them robust against many types of environmental noise. This resilience could address one of the major hurdles in quantum computing: error correction and qubit stability. A significant step forward was the recent use of time crystals in trapped-ion quantum computers, where researchers demonstrated their potential for executing quantum gates. By leveraging the stable periodicity of time crystals, quantum systems can operate in an environment that naturally mitigates decoherence, effectively improving the reliability of computations.

Recent advances have seen time crystals moving from theoretical oddities to functional components in experimental setups. For instance, researchers using Google’s Sycamore processor observed time-crystal behavior, showing how these systems can be integrated into existing quantum hardware. Similarly, trapped-ion systems have demonstrated the potential of time crystals to enhance the coherence of qubits, making them candidates for long-term storage and high-fidelity operations. Additionally, their unique oscillatory states could play a role in synchronizing quantum systems across distributed networks, paving the way for scalable quantum communication.

Despite these exciting prospects, integrating time crystals into practical quantum computing remains a challenge. Their behavior, while stable, is highly sensitive to precise conditions and external drivers. Scaling these systems to handle complex quantum algorithms will require significant advancements in both hardware and theoretical understanding. Furthermore, the interplay between time crystals and other emerging quantum technologies, such as topological qubits and error-correcting codes, remains an open field of inquiry. Bridging these domains could unlock entirely new architectures for quantum computation.
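The hallmark experimental signature of a discrete time crystal is that subharmonic response: the system settles into an oscillation at half the frequency of the drive that pushes it. As a toy illustration, here is a short NumPy sketch of a single, repeatedly pulsed qubit that shows the period-doubled signal. It is deliberately oversimplified, since a genuine time crystal also needs interactions and disorder to lock the oscillation in place, which a lone spin cannot capture.

```python
import numpy as np

# Toy illustration (not a real time crystal): a single qubit hit by a
# nearly-perfect pi pulse once per drive period. The measured <sigma_z>
# flips sign every period, i.e. it oscillates at half the drive frequency,
# which is the subharmonic signature discussed above.
theta = np.pi * 0.97          # slightly imperfect pi rotation per period
periods = np.arange(40)       # number of drive periods to simulate

# Rotation about the x axis by angle theta, applied once per period.
rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

sigma_z = np.array([[1, 0], [0, -1]])
state = np.array([1.0 + 0j, 0.0 + 0j])   # start in |0>, so <sigma_z> = +1

expectations = []
for _ in periods:
    state = rx @ state
    expectations.append(np.real(state.conj() @ sigma_z @ state))

# The sign alternates each period (period doubling); the imperfect pulse adds
# a slow envelope, showing why interactions are needed to stabilize a true
# discrete time crystal.
for n, z in zip(periods[:8], expectations[:8]):
    print(f"period {n + 1}: <sigma_z> = {z:+.3f}")
```

Running it prints an alternating expectation value with a slowly drifting envelope from the imperfect pulse, which is exactly the kind of drift that many-body effects suppress in the real experiments.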
The journey of time crystals from a theoretical prediction to an experimental reality is a testament to the rapid pace of quantum innovation. As we continue to explore their potential, these shimmering oscillations in the fabric of time may serve as a cornerstone for the next generation of quantum computers. In the ever-evolving narrative of quantum technology, time crystals represent both a scientific triumph and a beacon for what lies ahead—a fusion of curiosity, creativity, and the relentless pursuit of the unknown.

Thank you for joining me for this week’s edition of The Lindahl Letter. Stay curious, and see you next week as we delve deeper into the quantum frontier.

What’s next for The Lindahl Letter?
* Week 177: The Attention Economy: Why Your Focus Is Under Siege
* Week 178: Inside the Mind: The Science of Focus and Distraction
* Week 179: Designed to Distract: How Technology Hijacks Your Attention
* Week 180: The Focus Formula: Prioritize What Truly Matters
* Week 181: Your Attention Fortress: Building a Distraction-Free Life

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Welcome back to another edition of The Lindahl Letter. It’s week 175, and we’re diving into the fascinating topic of universal quantum computation. This is an area where the boundaries of theory and practical application intersect, offering both incredible promise and immense challenges. If you’re tuning in to this podcast for the first time, welcome aboard. For regular readers and listeners, you already know this is a space where we examine complex topics with an eye on clarity and relevance.

At its core, the concept of a universal quantum computer is as ambitious as it sounds. It’s the quantum computing equivalent of a general-purpose classical computer—think of it as a machine that can perform any quantum operation, given enough time and resources. The analogy to the classical Turing machine is apt, but the quantum realm is a different beast altogether. Where classical systems rely on bits flipping between 0 and 1, quantum systems leverage qubits, which exist in superpositions and can be entangled in ways that fundamentally alter how computations unfold.

Achieving universality in quantum computation boils down to the idea that we can simulate any quantum process using a combination of quantum gates. These gates are the building blocks of quantum circuits, manipulating qubits in ways that enable properties like superposition, entanglement, and interference. In practice, a small set of gates—such as the CNOT gate combined with single-qubit operations like the Hadamard and T gates—forms what’s known as a universal set. With these, any quantum operation can theoretically be approximated to arbitrary precision. A small gate-level sketch of that idea appears near the end of this piece.

Of course, theory and practice are rarely perfect companions. The current landscape of quantum computing is dominated by what’s known as Noisy Intermediate-Scale Quantum (NISQ) devices. These systems are powerful but imperfect, constrained by issues like qubit fidelity, error rates, and limited coherence times. The leap to universal quantum computation requires addressing two major challenges: error correction and scalability. Quantum error correction is a monumental task in itself, demanding additional qubits to safeguard against the natural noise and decoherence that plague quantum systems. Scalability, meanwhile, demands not just more qubits but better qubits—ones that can operate with higher fidelity and stronger connectivity.

Despite these hurdles, progress is being made. Theoretical frameworks, like the Church-Turing-Deutsch principle, assert that any physical process can be simulated by a universal quantum computer. That idea has fueled decades of research and development. On the practical side, companies like IBM, Google, and IonQ are racing to push the limits of what quantum systems can achieve. IBM’s ambitious roadmap to a million-qubit machine is a bold declaration of intent, and the algorithms already developed for quantum systems—like Shor’s algorithm for factoring large numbers—hint at the transformative potential waiting to be unlocked.

It’s easy to see why universal quantum computation captures the imagination. The implications stretch far beyond the confines of academia or industry, touching fields as diverse as cryptography, materials science, and optimization. Yet, the path forward is long and uncertain. It’s not a matter of if we get there but when—and how the journey reshapes the landscape of computing along the way.
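To make the gate-set idea a bit more tangible, here is a minimal NumPy sketch (independent of any vendor SDK) that writes a few gates from the universal set {H, T, CNOT} as matrices and composes two of them into the textbook Bell-state circuit. It is a toy calculation for intuition, not a statement about how any particular hardware platform implements these gates.

```python
import numpy as np

# Gates drawn from a universal set {H, T, CNOT}, written directly as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])             # T (pi/8) gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                       # control = qubit 0

# Two-qubit register starting in |00>, represented as a 4-dimensional vector.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# Apply H to qubit 0, then CNOT: the textbook two-gate Bell-state circuit.
state = CNOT @ (np.kron(H, I) @ state)
print(np.round(state, 3))   # amplitudes ~ [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)

# Longer single-qubit sequences built from H and T compose the same way,
# for example T H T applied to qubit 1 of the register.
U = np.kron(I, T @ H @ T)
state = U @ state
```

The point of the sketch is simply that arbitrary circuits are products of matrices drawn from a small fixed set, which is all that universality formally requires.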
Thank you for taking the time to explore this frontier with me. If you’ve made it this far, I appreciate your curiosity and engagement. As always, stay curious, stay informed, and I’ll see you next week for another deep dive.

What’s next for The Lindahl Letter?
* Week 176: Quantum Computing and Advances in Time Crystals
* Week 177: The Attention Economy: Why Your Focus Is Under Siege
* Week 178: Inside the Mind: The Science of Focus and Distraction
* Week 179: Designed to Distract: How Technology Grabs Your Attention
* Week 180: The Focus Formula: Prioritize What Truly Matters

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Quantum computing has long been hailed as a transformative technology with the potential to revolutionize fields such as cryptography, optimization, material science, and beyond [1]. However, quantum computing faces a fundamental challenge: the fragility of quantum states. Quantum bits, or qubits, are extraordinarily sensitive to errors caused by environmental noise, decoherence, and operational inaccuracies. Without robust error correction, this fragility undermines the reliability of quantum computations and makes it nearly impossible to scale quantum systems for practical use. Solving this problem is not just important—it is essential. Overcoming the challenge of error correction is the key to unlocking the transformative potential of quantum computing.

The most cited relevant reference here has over 900 citations. It’s 46 pages and rather math heavy in parts.

Gottesman, D. (2010, April). An introduction to quantum error correction and fault-tolerant quantum computation. In Quantum information science and its contributions to mathematics, Proceedings of Symposia in Applied Mathematics (Vol. 68, pp. 13-58). https://arxiv.org/pdf/0904.2557

Historically, quantum error correction has been viewed as a critical but demanding overhead. Detecting and correcting errors in quantum systems requires an extraordinary number of physical qubits to encode logical qubits, with some estimates suggesting hundreds to thousands of physical qubits are needed for just one logical qubit. This sheer overhead has presented a formidable barrier to scaling quantum systems. Recent advances, however, are changing the narrative. The concept of error correction tolerant quantum computing represents a new paradigm: rather than simply adding layers of error correction, these systems aim to minimize the resources and performance penalties associated with error correction. They incorporate innovations in fault-tolerant architectures, error-resilient algorithms, and hardware designs that lower baseline error rates, making error correction more efficient and less resource-intensive.

The significance of this shift cannot be overstated. Quantum computers operate using qubits that harness the principles of superposition and entanglement, which enable powerful computational possibilities but also make qubits susceptible to errors. Errors can take the form of bit flips, phase flips, or decoherence, any of which can disrupt calculations. A toy sketch of the simplest bit-flip case appears a little further down. Without a solution to these challenges, quantum computing will remain a theoretical possibility rather than a practical tool. Error correction tolerance offers a pathway forward, reducing the burden on physical qubits and accelerating the timeline for practical quantum systems.

The promise of error correction tolerant quantum computing lies in its ability to make quantum computing scalable, efficient, and cost-effective. With reduced error correction overhead, more logical qubits can be supported without requiring exponential increases in physical qubits. This enhances scalability while making quantum systems more efficient and affordable for research and industrial applications. Furthermore, error correction tolerance paves the way for faster execution of quantum algorithms, ensuring that quantum computers are not only reliable but also competitive with classical systems in terms of speed.
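To ground the bit-flip case mentioned above, here is a deliberately classical toy sketch of the three-qubit repetition code in Python. It encodes one logical bit into three physical bits and recovers it by majority vote; real quantum codes instead measure error syndromes without reading the data qubits directly, and they also have to handle phase flips, so treat this strictly as intuition for why redundancy suppresses errors.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(logical_bit: int) -> np.ndarray:
    """Three-bit repetition encoding: 0 -> 000, 1 -> 111."""
    return np.array([logical_bit] * 3)

def noisy_channel(codeword: np.ndarray, p_flip: float) -> np.ndarray:
    """Flip each physical bit independently with probability p_flip."""
    flips = rng.random(3) < p_flip
    return np.bitwise_xor(codeword, flips.astype(int))

def decode(codeword: np.ndarray) -> int:
    """Majority vote recovers the logical bit if at most one flip occurred."""
    return int(codeword.sum() >= 2)

# Compare the raw physical error rate with the logical error rate after encoding.
p_flip = 0.05
trials = 100_000
logical_errors = 0
for _ in range(trials):
    received = noisy_channel(encode(0), p_flip)
    logical_errors += decode(received) != 0

print(f"physical error rate : {p_flip}")
print(f"logical error rate  : {logical_errors / trials:.5f}")  # roughly 3 * p_flip**2
```

With a 5% physical flip rate the simulated logical error rate comes out near 0.7%, the quadratic suppression that makes paying for redundancy worthwhile; the resource debate in the main text is about how many physical qubits that protection ends up costing.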
Major players in the quantum space, including IBM, Google, and Rigetti, are actively pursuing this critical area of research. Recent breakthroughs include adaptive error correction that dynamically adjusts protocols to system performance, noise-aware algorithms that tolerate specific noise patterns, and hybrid quantum-classical approaches that use classical computation to support quantum error correction. These developments demonstrate both the complexity of the problem and the progress being made to address it. Looking ahead, future directions will likely include the integration of machine learning techniques to optimize error correction strategies and the development of materials and designs that are inherently resistant to errors.

Ultimately, solving the challenge of error correction is essential for quantum computing to achieve its full potential. Without it, the field will remain limited to small-scale, experimental systems. With it, quantum computing can scale to tackle some of the most complex problems in science, industry, and beyond. Error correction tolerance represents a critical step toward this future, making the dream of practical quantum computing not just possible, but inevitable.

Footnotes:
[1] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Error+correction+tolerant+quantum+computing&btnG=

What’s next for The Lindahl Letter?
* Week 175: universal quantum computation
* Week 176: Quantum Computing and Advances in Time Crystals
* Week 177: The Attention Economy: Why Your Focus Is Under Siege
* Week 178: Inside the Mind: The Science of Focus and Distraction
* Week 179: Designed to Distract: How Technology Hijacks Your Attention

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
We are going to spend some time digging into quantum computing over the next few weeks. Things are starting to move forward in that space, which is exciting [1]. Let’s not waste a second and just go ahead and jump right into the deep end of this magical quantum puzzle. Here we go!

Nondeterministic gates present a fascinating challenge within the evolving landscape of quantum computing. At their core, these gates function probabilistically, meaning their outcomes are not guaranteed in the deterministic sense familiar to classical computation. This intrinsic uncertainty aligns with the broader principles of quantum mechanics but complicates the goal of building reliable and scalable quantum systems. Understanding how to integrate nondeterministic gates into fault-tolerant architectures is an essential step in moving quantum computing from the lab to practical applications. On a side note, we may very well dig into the brilliantly intriguing world of time crystals again soon, during week 176, where some of the ambiguity of being probabilistic disappears.

Fault-tolerant quantum computation relies on carefully crafted error-correction techniques to manage the delicate states of qubits, which are highly susceptible to noise and decoherence. The introduction of nondeterministic gates adds another layer of complexity to this already intricate problem. These gates often succeed probabilistically, necessitating either multiple attempts or supplementary operations to ensure the desired outcome. While this characteristic can simplify certain hardware requirements—especially in photonic systems where nondeterministic interactions are a natural fit—it also demands more sophisticated error management strategies to maintain computational fidelity.

The key to making nondeterministic gates viable lies in adaptive computation strategies. Measurement-based quantum computing (MBQC) exemplifies this approach, using entangled resource states and measurements to drive computation. In MBQC, the probabilistic nature of certain operations is counterbalanced by flexible correction protocols, which adjust subsequent steps based on observed outcomes. It’s basically overhead from error checking and dropping the results of failed gates. This adaptability creates a robust framework for handling nondeterminism but comes at the cost of increased resource requirements, including additional qubits and computational overhead. Balancing these trade-offs is critical for the success of practical quantum systems. A rough sketch of that overhead appears just after these paragraphs.

Nondeterministic gates challenge the quantum community to rethink what fault tolerance means in this new paradigm. Traditional error-correction methods like the surface code were designed with deterministic operations in mind, and they must evolve to address the probabilistic errors introduced by these gates. This evolution involves tighter integration of classical and quantum systems, allowing for real-time error detection and response. It also calls for a deeper understanding of how to optimize quantum resources to handle the additional uncertainty without sacrificing scalability.
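To get a rough feel for that overhead, here is a small Monte Carlo sketch in plain Python of the repeat-until-success pattern: a heralded gate that succeeds with probability p is simply retried until it works, so the expected cost is 1/p attempts per gate. The success probability and circuit size below are made-up numbers chosen purely for illustration.

```python
import random

random.seed(42)

def attempts_until_success(p_success: float) -> int:
    """Retry a heralded, nondeterministic gate until it reports success."""
    attempts = 1
    while random.random() >= p_success:
        attempts += 1
    return attempts

# Illustrative numbers only: a circuit with 200 nondeterministic gates, each
# heralding success with probability 0.25 (not a measured hardware value).
p_success = 0.25
gates_in_circuit = 200
trials = 10_000

totals = [
    sum(attempts_until_success(p_success) for _ in range(gates_in_circuit))
    for _ in range(trials)
]
average_total = sum(totals) / trials

print(f"expected attempts per gate : {1 / p_success:.1f}")
print(f"simulated attempts/circuit : {average_total:.1f}")  # about 200 / 0.25 = 800
```

The geometric blow-up in attempts is exactly the cost that adaptive schemes like MBQC trade against extra entangled resource qubits and classical feed-forward.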
Here are three articles to check out:

Li, Y., Barrett, S. D., Stace, T. M., & Benjamin, S. C. (2010). Fault tolerant quantum computation with nondeterministic gates. Physical Review Letters, 105(25), 250502. https://arxiv.org/pdf/1008.1369

Kieling, K., Rudolph, T., & Eisert, J. (2007). Percolation, renormalization, and quantum computing with nondeterministic gates. Physical Review Letters, 99(13), 130501. https://arxiv.org/pdf/quant-ph/0611140

Nielsen, M. A., & Dawson, C. M. (2005). Fault-tolerant quantum computation with cluster states. Physical Review A, 71(4), 042323. https://arxiv.org/pdf/quant-ph/0405134

Footnotes:
[1] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Nondeterministic+gates+tolerant+quantum+computation&btnG=

What’s next for The Lindahl Letter?
* Week 174: error correction tolerant quantum computing
* Week 175: universal quantum computation
* Week 176: Quantum Computing and Advances in Time Crystals
* Week 177: The Attention Economy: Why Your Focus Is Under Siege
* Week 178: Inside the Mind: The Science of Focus and Distraction

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Transfer learning has proven to be an invaluable tool in machine learning, enabling us to take advantage of pre-trained models to boost performance on new tasks, even with limited data. Instead of training a model from scratch, we can repurpose one trained on a large, diverse dataset to extract features—essential characteristics of the data—that are often applicable across various problems. For instance, in image recognition, these features might be edges, textures, or patterns that the model has learned to detect. Transfer learning allows us to reuse these learned features and apply them to new tasks, saving both time and computational resources.

The key idea behind transfer learning for features is that many of the low-level features learned by a model are transferable to new domains. Models like ResNet in computer vision or BERT in natural language processing learn generalizable features from large datasets, which can be applied to a variety of new tasks. By transferring the feature extraction layers from these models, we can fine-tune them for specific tasks with far less data. This significantly reduces the amount of time and effort needed to train a model, since the lower-level features have already been learned, allowing us to focus on task-specific learning. A short sketch of what this looks like in code appears at the end of this piece.

Take medical imaging, for example. A model trained on a vast dataset of general images can be fine-tuned for tasks like detecting tumors in X-rays or MRIs by leveraging the features it already knows how to extract. Similarly, in natural language processing, models like GPT or BERT can be adapted to perform sentiment analysis or text classification tasks with minimal additional data. In voice recognition, a pre-trained model could be adapted to identify speakers or recognize commands in a noisy environment, utilizing previously learned features from a broader speech dataset.

While transfer learning offers numerous benefits, it’s not without its challenges. One potential issue is domain shift, where the source and target datasets are too dissimilar, making the transferred features less useful. Fine-tuning is often required to ensure the model performs well on the new task, and this can be tricky if the new data is too sparse. Additionally, there’s the risk of overfitting when working with limited data, which could compromise the model’s generalization ability. Despite these hurdles, transfer learning remains a powerful tool, allowing us to adapt pre-trained models to new challenges quickly and efficiently.

Looking ahead, the growing availability of pre-trained models and powerful transfer learning techniques is likely to drive even more innovations in fields like healthcare, finance, and beyond. As the models become more specialized and the datasets even larger, the opportunities for transfer learning will expand, enabling more complex tasks to be tackled with fewer resources. By enabling machines to generalize features across tasks, transfer learning is not only enhancing efficiency but also making machine learning more accessible to a wider range of applications, from startup projects to large-scale enterprise solutions.
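Here is a short PyTorch/torchvision sketch of the frozen-backbone pattern described above. The class count and batch are placeholders, and the exact weights argument has shifted across torchvision releases, so read this as a minimal sketch of the idea rather than a drop-in recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 backbone pre-trained on ImageNet (recent torchvision API;
# older releases used pretrained=True instead of the weights argument).
model = models.resnet18(weights="DEFAULT")

# Freeze the feature extraction layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a hypothetical 3-class task,
# e.g. a small medical imaging dataset.
num_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters go to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Sketch of one training step on a placeholder batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

The design choice that matters is that only `model.fc` receives gradients, so the pre-trained feature extraction layers are reused as-is while the small new head adapts to the target task.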
What’s next for The Lindahl Letter?
* Week 173: nondeterministic gates tolerant quantum computation
* Week 174: error correction tolerant quantum computing
* Week 175: universal quantum computation
* Week 176: resilient quantum computation
* Week 177: quantum computation with higher dimensional systems

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
As machine learning evolves, traditional approaches to feature engineering are being transformed by the power of graph data structures. Graphs—representing entities as nodes and relationships as edges—provide a rich framework to model complex, non-linear connections that go beyond what’s possible with tabular data. It’s an area of focus I keep coming back to as a way to better represent knowledge. By embracing graph-based feature engineering, we can uncover deeper insights and create more effective predictive models. I spent some time looking around Google Scholar results trying to find a really interesting deep dive on this subject and was somewhat disappointed [1].

Graphs are highly versatile and have applications in diverse domains. In social networks, for example, users (nodes) interact through actions like likes, shares, or friendships (edges). Graph-based features such as centrality measures can reveal influential users or detect communities. In e-commerce, graphs model user-product interactions, capturing relationships that enhance recommendation systems. For instance, understanding the co-purchase network helps predict new product recommendations. Similarly, in bioinformatics, graphs representing protein-protein interactions or gene relationships enable predictions about biological functions or disease pathways. Knowledge graphs, which structure information in interconnected formats, help machines reason over relationships, such as identifying entity connections for natural language processing tasks.

To leverage the full potential of graphs, several advanced techniques are employed. Centrality measures, for instance, quantify the importance of nodes in a graph. Degree centrality counts direct connections, while betweenness centrality identifies nodes bridging clusters. These measures are critical for tasks like identifying influencers or analyzing communication networks. Graph embeddings, such as Node2Vec or DeepWalk, map graph structures into continuous vector spaces, making them compatible with machine learning models [2][3]. Additionally, Graph Neural Networks (GNNs), like Graph Convolutional Networks (GCNs), aggregate information from neighboring nodes. These networks excel in tasks such as node classification, where labels are assigned to nodes (e.g., identifying spam accounts), and link prediction, which predicts relationships between nodes, such as friendships in social networks. A small example of computing centrality features this way appears at the end of this piece.

Despite their advantages, graph-based feature engineering comes with challenges. Large-scale graphs can be resource-intensive, requiring efficient algorithms like graph sampling or distributed computing frameworks to manage their computational costs. Sparse graphs with limited connections can also hinder meaningful feature extraction, making advanced techniques like graph regularization essential. Addressing these challenges is critical to fully harness the potential of graph-based methods and create robust machine learning models.

Graph-based feature engineering is revolutionizing machine learning by enabling us to capture relationships and dependencies within data. From refining recommendation systems to advancing healthcare predictions, graph-based approaches pave the way for deeper, more accurate insights in an interconnected world. As machine learning continues to evolve, the potential of graph-based methods will only grow, offering exciting opportunities for innovation.
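To keep the centrality idea concrete, here is a minimal NetworkX sketch that turns a tiny, made-up friendship graph into per-node features such as degree and betweenness centrality, the kind of columns that could then feed a downstream model.

```python
import networkx as nx
import pandas as pd

# A tiny, made-up friendship graph purely for illustration.
edges = [
    ("ana", "ben"), ("ana", "cara"), ("ben", "cara"),
    ("cara", "dev"),                      # cara bridges the two clusters
    ("dev", "eli"), ("dev", "fay"), ("eli", "fay"),
]
G = nx.Graph(edges)

# Graph-based features computed per node (user).
features = pd.DataFrame({
    "degree_centrality": nx.degree_centrality(G),
    "betweenness_centrality": nx.betweenness_centrality(G),
    "clustering_coefficient": nx.clustering(G),
})

# Nodes that bridge communities stand out on betweenness centrality.
print(features.sort_values("betweenness_centrality", ascending=False))
```

In this toy graph the bridging node scores highest on betweenness centrality, which is exactly the structural property that makes such measures useful as engineered features for tasks like influencer detection.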
I’m ultimately interested in how knowledge ends up getting stored and represented moving forward within the context of exceedingly large language models.

Footnotes:
[1] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C6&q=Graph-Based+Feature+Engineering&btnG=
[2] https://arxiv.org/pdf/1607.00653
[3] https://arxiv.org/pdf/1609.02907

What’s next for The Lindahl Letter?
* Week 172: Transfer Learning for Features
* Week 173: nondeterministic gates tolerant quantum computation
* Week 174: error correction tolerant quantum computing
* Week 175: universal quantum computation
* Week 176: resilient quantum computation

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Thank you for tuning in to this audio-only podcast presentation. This is week 170 of The Lindahl Letter publication. A new edition arrives hopefully every Friday. This week the topic under consideration for The Lindahl Letter is, “Are 8K Blu-rays a thing?”

The short answer is no—there’s currently no official 8K Blu-ray format on the market. The highest-resolution Blu-ray available right now is 4K Ultra HD. Despite the emergence of 8K TVs, the development of an 8K physical media standard has been slow to nonexistent. You can generate content at 8K using 65/70mm IMAX film scans or by recording natively with 8K RED cameras. But there’s more to this story, because what’s holding back 8K Blu-ray isn’t just a lack of demand for higher resolutions. It’s about the larger shift in how we consume media, the infrastructure needed to support it, and even questions of accessibility and ownership.

The dominance of streaming has completely changed the landscape of home entertainment. Most people today reach for their remote or phone, pull up a streaming app, and press play, accessing a vast library of content without needing physical discs. And while it’s convenient, streaming isn’t the perfect solution for everyone, and it raises some interesting challenges for high-resolution content. The reality is, even today, reliable 4K streaming requires a fast and stable internet connection—something many regions in the world, including parts of the United States, still struggle with.

For people in areas with slower or less reliable internet, streaming high-definition content, let alone 4K or 8K, isn’t an option. This digital divide is often overlooked in the rush to adopt the newest formats and streaming platforms. A physical 8K Blu-ray option, although niche, would offer these users a way to access ultra-high-definition content without relying on the vagaries of internet service. Physical media doesn’t buffer or depend on bandwidth. It’s a permanent, reliable way to enjoy high-quality media.

Another issue that streaming raises is the matter of ownership. When you buy a Blu-ray disc, you own a copy of that film or show—something tangible that you can keep, loan, or sell. With streaming, you’re essentially renting access to content. Licensing agreements and platform decisions dictate what’s available, and content can disappear from a service overnight due to contract disputes or shifting corporate strategies. Even if you purchase a digital copy, the platform still controls your access to it, and it could be removed or rendered inaccessible if the platform decides to remove it or goes under. We’ve already seen titles vanish from digital libraries, leaving consumers who thought they “owned” these digital copies with no recourse.

For film enthusiasts, collectors, or anyone who values the security of owning their media outright, physical Blu-rays still hold a lot of appeal. An 8K Blu-ray, in particular, would give these users a chance to own ultra-high-definition content at its absolute best quality. Streaming platforms, while convenient, can’t match the fidelity of a physical disc, especially when it comes to uncompressed audio and video quality. And for those who value the archival aspect of physical media, 8K Blu-ray would represent a way to preserve the best possible version of their favorite films and shows.

Yet, despite these potential advantages, the market for physical media has become niche.
Blu-ray players are harder to come by, with fewer manufacturers making them each year, and studios are releasing fewer physical editions. Streaming is simply more profitable and cost-effective for companies, and it aligns with current consumer habits. There’s also the fact that creating a new standard for 8K Blu-ray would require a significant investment in technology, from new players to new discs, and that investment likely wouldn’t be recouped given current market conditions.

In the meantime, tech companies are focusing on improving streaming infrastructure to support 8K content. Compression algorithms are advancing, and AI-powered upscaling technologies are making it possible for 4K content to look sharper on 8K screens, even if it’s not natively 8K. This makes it unlikely that we’ll see a mass-market push for 8K Blu-ray anytime soon [1][2]. It’s possible that high-quality 8K streaming will fill that void, but it’s a solution that still doesn’t serve everyone equally.

So, are 8K Blu-rays a thing? Not at this point, and they may never become mainstream. But as we move toward a fully digital media landscape, we should keep in mind what’s lost when physical formats disappear: ownership, access for all, and the assurance that our favorite content won’t vanish overnight.

Thank you for joining me for this week’s discussion. Until next week, let’s keep asking what the future of media really means for us all.

Things to consider this week:
“Monster 4,400-qubit quantum processor is '25,000 times faster' than its predecessor” https://www.livescience.com/technology/computing/monster-4-400-qubit-quantum-processor-is-25-000-times-faster-than-its-predecessor
TechCrunch: “Microsoft and Atom Computing will launch a commercial quantum computer in 2025” https://techcrunch.com/2024/11/19/microsoft-and-atom-computing-will-launch-a-commercial-quantum-computer-in-2025/
“Physicists Transformed a Quantum Computer Into a Time Crystal” https://www.sciencealert.com/physicists-transformed-a-quantum-computer-into-a-time-crystal

Footnotes:
[1] https://www.homecinemachoice.com/content/no-8k-upgrade-blu-ray-admits-8k-association
[2] https://8kassociation.com/

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
Thank you for tuning in to this audio-only podcast presentation. Here we are, at week 169 of The Lindahl Letter, reflecting on a different kind of science—one rooted in a boldness that prioritized discovery over deadlines, adventure over immediate outcomes. This week’s topic, “Pure science from the last millennium,” brings us to a fundamental question: What do we want scientific investment to be going forward?

Decades ago, space exploration wasn’t about quarterly returns or brand endorsements. It was the “final frontier”—a true and meaningful challenge that demanded our collective curiosity and belief in something larger than ourselves. When we look back at Voyager 1, launched in September of 1977 and still sending back data from beyond the solar system, we’re reminded of an era when we were willing to invest in the unknown and in pure science [1]. We, the taxpayers, poured resources into pure science, trusting that whatever Voyager discovered would expand our horizons, even if it took generations. Those investments bank intergenerational equity in ways that pay forward with unlocked potential. That previous era of government-driven budgets is evolving.

Today, we’re witnessing a new chapter in space exploration, one driven not only by government agencies but by private space companies backed by some very rich individuals. Companies like SpaceX, Blue Origin, and others are racing to develop technologies that can propel humanity forward, not just in the pursuit of knowledge but with a practical eye on commercial possibilities. These companies have reignited public interest in space exploration, capturing imaginations with promises of lunar bases, Mars colonies, and low-cost satellites. But they bring a shift in perspective too—a focus on efficiency, profitability, and measurable results.

Private companies are undeniably accelerating technological progress. They’re launching rockets at a pace governments could never match and making space travel more accessible. In many ways, they’re pushing us into the future faster than traditional models of science funding would allow. But this pace has implications: private companies often operate on a very different timeline and set of incentives than the public missions of the past. The long, open-ended, pure-science-based inquiries that characterized projects like Voyager or Hubble might not fit as seamlessly into the bottom-line-driven model of private enterprise.

What does this mean for the future of pure science? There’s a risk that in our rush to commercialize space, we could lose sight of the kind of exploration that doesn’t pay off right away, the kind that asks questions not because they’re immediately useful but because they might change everything someday. Voyager, Hubble, and the Mars rovers were funded with a faith that curiosity itself was valuable. They didn’t need to deliver a profit; they only needed to expand our knowledge.

Investment in pure science has, over the years, shifted in response to economic pressures, political priorities, and the rise of private industry. In the mid-20th century, there was a golden age of public funding for fundamental research, driven by a sense of national pride and urgency, especially during the Space Race. Governments around the world poured money into science for the sake of knowledge itself—driven by the belief that scientific exploration, even with uncertain outcomes, would ultimately benefit society.
This mindset fueled projects like the Apollo missions, the Voyager probes, and the Hubble Space Telescope, all examples of pure scientific research where the primary goal was exploration, not commercial gain.

But over the last few decades, the focus has gradually shifted toward more immediate, application-driven science. Public budgets have tightened, and government funding has increasingly emphasized practical and commercial outcomes. Today, many funding bodies expect quick, measurable results—preferably ones that contribute to the economy, healthcare, or national security. This shift means that pure scientific research, with its inherently uncertain timeline and lack of immediate commercial payoff, often struggles to secure the same level of investment it once enjoyed.

Thank you for joining me this week, and here’s to staying curious, even in a world that asks us to measure every journey in miles and profits.

Things to consider this week:
TechCrunch: “Bluesky raises $15M Series A, plans to launch subscriptions” https://techcrunch.com/2024/10/24/bluesky-raises-15m-series-a-plans-to-launch-subscriptions/
Reuters: “New Nvidia AI chips overheating in servers, the Information reports” https://finance.yahoo.com/news/nvidia-ai-chips-face-issue-141200900.html
[Must watch] Gary Marcus: OpenAI could be the next WeWork https://www.foxbusiness.com/video/6364719527112

Footnotes:
[1] https://www.cnn.com/2024/11/01/science/voyager-1-transmitter-issue/index.html

If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com
We all thought we would be able to easily ask the house to turn on all the lights or command the television by voice just by walking into the room. Some of us thought the entire wall would be a television screen by now, and that has not happened either. However, some of the new 100 inch TVs on the market are really large. Enabling automated actions is what is happening within the latest development kits from the companies making and contributing to LLMs. We have seen Google teams introduce low-stakes automated actions like making dinner reservations or screening calls. Those types of training activities help them build and reinforce automations without really doing anything particularly risky. It’s all about the ecosystems: when Google teams start to let an assistant-based agent take deeply integrated actions, things are going to change rapidly. That is when your agent will be empowered to the point of being able to really automate some things that will be impactful.

Sam Altman of OpenAI has said that 2025 will be the year that we see agents working effectively [1]. I’m guessing that Sam has spent some time thinking deeply about what these agents are going to be capable of doing. We will probably start to see Google Calendar automations where meetings with unfulfilled action items automatically get scheduled, or task follow-ups arrive from the agent by chat. This type of recursive review of things that happened, where a transcript is recorded and checked against a project plan or calendar, is certainly on the roadmap. It’s going to be about bringing the next set of low-stakes actions to the business world and calling it revolutionary. A lot of hype is going to occur. Sure, systems with robotic process automation or coded workflows have been able to automate things for people willing to invest in those automations. The advent of agents that are able to schedule automated actions fundamentally lowers that barrier to entry. People are probably going to be more willing to trust one of the known major brands with this technology, considering that most smartphones have banking information saved and are logged into a myriad of other consequential accounts. Having practical limits on what agents are able to enable in terms of automation remains probably the most important process enablement gate to be considered. Apparently the teams at Google don’t expect to deploy any useful agents until 2025 at the earliest [2].

The push toward true agent-based automation is an ongoing journey. While current tools may seem like small, incremental advances—like handling calendar follow-ups or screening calls—they represent foundational steps toward a more integrated, intuitive digital ecosystem. As AI agents begin to bridge the gap between simple command-driven functions and context-aware actions, we’re stepping into an era where automation isn't just a convenience but a fundamental part of daily life. This gradual transformation will bring more impactful applications, positioning agents not as isolated tools but as active partners in our productivity.

Footnotes:
[1] https://www.tomsguide.com/ai/chatgpt/the-agents-are-coming-openai-confirms-ai-will-work-without-humans-in-2025
[2] https://techcrunch.com/2024/10/29/google-says-its-next-gen-ai-agents-wont-launch-until-2025-at-the-earliest/

If you enjoyed this content, then please take a moment and share it with a friend.
If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit nelslindahl.substack.com