The Daily Qubit

A trifecta of quantum diffusion models is used to generate synthetic data that boosts few-shot learning accuracy, QML models are put to the test classifying mobile traffic data, QGNNs are used to improve supervised learning on relational data, and more.

Welcome to The Daily Qubit!

Get the latest in top quantum news and research Monday through Friday, summarized for quick reading so you stay informed without missing a qubit.

Have questions, feedback, or ideas? Fill out the survey at the end of the issue or email me directly at [email protected].

And remember—friends don’t let friends miss out on the quantum era. If you enjoy The Daily Qubit, pass it along to others who’d appreciate it too.

Happy reading and onward!

Cierra

Today’s issue includes:

  • Several quantum diffusion models are used to generate synthetic data to improve the few-shot learning accuracy of quantum neural networks.

  • Classical and quantum machine learning models are compared head-to-head on classifying mobile app traffic.

  • Relational data is mapped into graphs for QGNNs to make machine learning on multi-table datasets more efficient.

QUANTUM APPLICATION HEADLINES

One of the profound strengths of human intelligence is our capacity to draw insights from limited information, to make inferences where data is scarce and patterns are faint. In machine learning, and especially in deep learning, very large annotated datasets are typically required to train models capable of accurate classification. Yet there are countless scenarios where such datasets are out of reach because the data is rare, expensive to label, or both. Few-shot learning offers a solution: a method that allows models to perform accurate classification with only a handful of examples, mirroring an essential, often-overlooked trait of human cognition.

Image: by Midjourney for The Daily Qubit

APPLICATION: Scientists from Mitsubishi Electric Research Laboratories introduce quantum diffusion models (QDMs) to address challenges in quantum few-shot learning. Three QDM-based methods are proposed to improve the performance of quantum neural networks (QNNs) in tasks with limited training data.

SIGNIFICANCE: Few-shot learning is essential in cases where large datasets are unavailable, yet traditional QML methods often fall short due to noise and computational limits. QDMs can generate synthetic training data, improving the accuracy of QNN models in low-data scenarios. These methods could advance practical applications of QML in areas with limited, high-stakes data, such as healthcare diagnostics and financial modeling.

HOW: The study implements QDMs to augment QNN training by generating synthetic data. Three approaches are explored: QDiff-LGGI (Label-Guided Generation Inference) expands datasets with QDM-generated samples, QDiff-LGNAI (Label-Guided Noise Addition Inference) leverages label guidance during noise addition for accurate inference, and QDiff-LGDI (Label-Guided Denoising Inference) applies label-guided denoising for refined noise reduction. All three QDiff-based algorithms achieved notable results, outperforming traditional QML methods in few-shot learning tasks. Specifically, QDiff-LGNAI reached 97.8% accuracy in 2-way, 1-shot learning on the Digits MNIST dataset.
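
For intuition, here is a minimal sketch of the augmentation idea behind QDiff-LGGI: synthesize extra labeled samples, add them to the tiny support set, and train a variational QNN classifier on the enlarged set. The Gaussian-jitter generator below is a classical stand-in for a trained label-guided quantum diffusion model, and all names, sizes, and hyperparameters are illustrative rather than the paper's implementation (PennyLane assumed).

```python
# Minimal sketch: augment a 2-way, 1-shot support set with synthetic samples,
# then train a variational QNN classifier on the enlarged set. The Gaussian
# jitter below is a classical stand-in for sampling a trained, label-guided
# quantum diffusion model; all sizes and names are illustrative.
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(weights, x):
    qml.AngleEmbedding(x, wires=range(n_qubits))          # encode features as rotations
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))                      # scalar score in [-1, 1]

def generate_synthetic(support_x, n_samples, noise=0.1, seed=0):
    """Stand-in for label-guided QDM sampling: jitter the lone support example."""
    rng = np.random.default_rng(seed)
    return support_x + noise * rng.standard_normal((n_samples, support_x.size))

# One support example per class (2-way, 1-shot), four features each.
support = {0: np.array([0.1, 0.5, 0.9, 0.3]), 1: np.array([0.8, 0.2, 0.4, 0.7])}
X, y = [], []
for label, sx in support.items():
    X.append(sx)
    X.extend(generate_synthetic(sx, n_samples=20, seed=label))  # augmented set
    y += [1.0 if label else -1.0] * 21
X = pnp.array(X, requires_grad=False)
y = pnp.array(y, requires_grad=False)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = pnp.random.random(shape, requires_grad=True)

def cost(w):
    preds = pnp.stack([qnn(w, x) for x in X])
    return pnp.mean((preds - y) ** 2)                     # squared loss on ±1 labels

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(30):
    weights = opt.step(cost, weights)
```

In the paper's setting, the jitter function would be replaced by sampling the reverse (denoising) process of a label-guided quantum diffusion model; the noise-addition and denoising variants build on the same pipeline.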

BY THE NUMBERS:

  • 3 - The number of algorithms developed for enhancing QNNs in few-shot learning: QDiff-LGGI, QDiff-LGNAI, QDiff-LGDI.

  • 3 - Also the number of datasets used for evaluation: Digits MNIST, MNIST, and Fashion MNIST.

  • 97.8% - Accuracy achieved by QDiff-LGNAI in 2-way 1-shot learning on the Digits MNIST dataset, surpassing baseline QML methods.

The traffic of the digital world runs over networks packed with a constant flow of data from countless sources, each vying for attention. As mobile devices grow more integral to our lives, the challenge of efficiently managing this traffic becomes more pressing. With an endless stream of app data, from video calls to chat messages, accurately identifying and classifying these flows is central to maintaining network performance and security. Yet as traffic volume and complexity grow, so do the computational demands.

Image: by Midjourney for The Daily Qubit

APPLICATION: Researchers from the University of Napoli Federico II and the University of Verona compare the effectiveness of classical machine learning and quantum machine learning approaches for mobile app traffic classification. They assess whether quantum models, specifically quantum neural networks, can handle the complexity and scalability needed for real-time classification of mobile network traffic, using the MIRAGE-COVID-CCMA-2022 dataset for experiments.

SIGNIFICANCE: As mobile networks grow in complexity, classifying traffic from different apps (e.g., chat, video, and audio applications) becomes essential for managing network resources and security. Quantum models may be able to process data quickly and improve mobile traffic classification, regardless of evolving traffic patterns and computational demands.

HOW: The researchers trained and compared classical models (specifically, a multi-layer perceptron and 1D convolutional neural networks) and quantum neural networks on classifying mobile app traffic. Because the QML models encode classical data into quantum states, the researchers suggest they may capture richer representations of complex patterns. Testing involved evaluating accuracy, precision, recall, and F1 scores across multiple runs. Overall, the classical 1D-CNN outperformed the quantum models in app classification, achieving 91.01% accuracy versus 85.00% for the best quantum model. While a quantum advantage is not yet evident, this comparison is worth revisiting as the technology matures, since the challenge of classifying real-time, complex traffic remains.
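
As a point of reference for the classical baseline, here is a minimal 1D-CNN sketch of the kind compared against the QNNs. The input length, channel counts, and layer sizes are assumptions for illustration; the paper's exact architecture and the MIRAGE feature layout are not reproduced here (PyTorch assumed).

```python
# A minimal 1D-CNN traffic classifier sketch. The input length (32 packet-level
# features per flow), channel counts, and the 5 app classes are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    def __init__(self, seq_len: int = 32, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),   # per-flow feature sequence
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, seq_len), e.g. packet sizes or inter-arrival times per flow
        return self.classifier(self.features(x).flatten(1))

model = TrafficCNN()
flows = torch.randn(8, 1, 32)      # a dummy batch of 8 flows
logits = model(flows)              # (8, 5): one score per app class
print(logits.argmax(dim=1))        # predicted app label per flow
```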

BY THE NUMBERS:

  • 85.00% – Highest accuracy achieved by the quantum model in app classification.

  • 91.01% – Highest accuracy achieved by the 1D-CNN model in app classification.

  • 5 – Total mobile apps (Discord, Messenger, Skype, Teams, and Zoom) classified in the study.

  • 3 – Different activities classified (audiocall, chat, and videocall).

Graph neural networks (GNNs) have been an integral addition to machine learning because they allow complex, structured data to be processed in graph form, which is relevant for applications like social network analysis, recommendation systems, and fraud detection. However, traditional GNNs can struggle with massive, multi-table relational databases due to their computational demands. Quantum graph neural networks (QGNNs), which leverage quantum principles like superposition to handle relational data more efficiently, may eventually process large-scale, complex datasets with greater speed and scalability.

Image: by Midjourney for The Daily Qubit

APPLICATION: A team from the University of Maribor and the Universität zu Lübeck explores supervised learning on relational databases using quantum graph neural networks. They propose a framework where relational database structures are mapped into graphs and processed by QGNNs, seeking to streamline complex machine learning tasks directly on relational data.

SIGNIFICANCE: Traditional machine learning struggles with relational databases due to their multi-table structure. QGNNs may handle relational data in graph form more efficiently by drawing on quantum computing principles like superposition. If successful, QGNNs could transform industrial data analytics by handling vast, multi-terabyte databases whose graph representations contain millions of nodes, supporting tasks that are computationally intense on classical systems. This has relevance for data-driven tasks like customer segmentation, fraud detection, and material discovery in sectors with vast, complex datasets.

HOW: The authors outline a QGNN framework that replaces classical graph neural network steps with quantum operations. Key components include encoding relational data into quantum-compatible graph structures and using variational quantum circuits to learn from these graphs. Hybrid models combining classical and quantum layers are also explored, optimizing efficiency on current quantum hardware while preserving relational data complexity. However, challenges in quantum encoding, circuit optimization, and current hardware limitations need further research before QGNNs are ready for industrial-scale applications.
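
To make the encoding step concrete, here is a small sketch of turning two relational tables into a graph whose nodes are rows and whose edges follow foreign keys. The tables and keys are invented for illustration; in the proposed framework, the QGNN would consume the resulting graph downstream (networkx assumed).

```python
# Sketch of the first step in the framework: mapping relational tables into a
# graph with one node per row and one edge per foreign-key reference. The
# tables and keys here are invented for illustration.
import networkx as nx

customers = [{"id": 1, "segment": "retail"}, {"id": 2, "segment": "business"}]
orders = [
    {"id": 10, "customer_id": 1, "total": 42.0},
    {"id": 11, "customer_id": 1, "total": 19.5},
    {"id": 12, "customer_id": 2, "total": 230.0},
]

g = nx.Graph()
for row in customers:
    g.add_node(("customer", row["id"]), **row)       # one node per customer row
for row in orders:
    g.add_node(("order", row["id"]), **row)          # one node per order row
    g.add_edge(("order", row["id"]),                 # edge along the foreign key
               ("customer", row["customer_id"]))

print(g.number_of_nodes(), g.number_of_edges())      # 5 nodes, 3 edges
```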

BY THE NUMBERS:

  • 22% – Portion of data-driven insights used by industry decision-makers, highlighting the need for more efficient, actionable machine learning on relational data.

  • Tens of terabytes – Typical size of relational datasets targeted for QGNN applications, demonstrating the framework's industrial-scale potential.

  • Millions – Estimated number of nodes in complex relational graphs derived from databases.

RESEARCH HIGHLIGHTS

⏩️ Phasecraft Ltd. led a study demonstrating that a low-depth quantum approximate optimization algorithm (QAOA) can solve certain symmetric and near-symmetric optimization problems exponentially faster than classical methods, specifically on instances where classical algorithms would typically require exponential time (a generic low-depth QAOA circuit is sketched after these highlights).

⏩️ Researchers from the Zhejiang Key Laboratory of Biomedical Intelligent Computing Technology and the PVP Siddhartha Institute of Technology introduce a quantum intrusion detection system that uses quantum machine learning and outlier analysis to detect distributed denial-of-service (DDoS) attacks. By using quantum neural networks and unique encoding methods, the model achieves a detection accuracy of 99.87%.
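
For readers who want a feel for what "low-depth" means in the Phasecraft result, below is a generic depth-1 (p=1) QAOA circuit for MaxCut. This is the textbook construction, not Phasecraft's specific problem family or analysis (PennyLane assumed).

```python
# Generic depth-1 (p=1) QAOA for MaxCut on a 4-node ring, shown only to
# illustrate the shape of a low-depth circuit: one cost layer and one mixer
# layer after the initial superposition.
import networkx as nx
import pennylane as qml
from pennylane import numpy as pnp

graph = nx.cycle_graph(4)
cost_h, mixer_h = qml.qaoa.maxcut(graph)      # cost and mixer Hamiltonians

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def circuit(gamma, alpha):
    for w in range(4):
        qml.Hadamard(wires=w)                 # uniform superposition
    qml.qaoa.cost_layer(gamma, cost_h)        # exp(-i * gamma * H_C)
    qml.qaoa.mixer_layer(alpha, mixer_h)      # exp(-i * alpha * H_M)
    return qml.expval(cost_h)                 # minimized at the max cut

opt = qml.GradientDescentOptimizer(stepsize=0.1)
gamma = pnp.array(0.5, requires_grad=True)
alpha = pnp.array(0.5, requires_grad=True)
for _ in range(50):
    (gamma, alpha), _ = opt.step_and_cost(circuit, gamma, alpha)
```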

NEWS QUICK BYTES

💻️ IBM introduces "fractional gates" on its Quantum Heron QPUs, designed to reduce circuit depth and improve efficiency for utility-scale quantum workloads, especially in simulations of natural systems. These new gates replace traditional pulse-level control, offering single- and two-qubit rotations that cut gate counts, shorten execution times, and optimize performance on large quantum circuits; the short transpilation sketch after these bytes shows where the gate-count saving comes from.

🐇 Nu Quantum has adopted CERN's White Rabbit technology to enable precise timing synchronization essential for scaling quantum computing networks across data centers. Previewing their quantum networking unit at the National Quantum Technology Showcase, Nu Quantum is working towards a modular approach that interlinks quantum nodes into a distributed network, eventually leading to scalable, data-center-ready quantum systems with WR providing sub-nanosecond synchronization for entanglement and control.

⚡️ IonQ has partnered with imec to develop photonic integrated circuits (PICs) and chip-scale ion traps, aiming to miniaturize and improve the efficiency of its trapped-ion quantum computing systems. This collaboration seeks to replace traditional bulk optical components with integrated photonic devices, enhancing qubit density, reducing system size and cost, and advancing IonQ’s capabilities toward scalable, enterprise-grade quantum systems.

💡 IonQ has partnered with NKT Photonics to develop next-generation laser systems for its trapped-ion quantum computers, including the upcoming IonQ Tempo and barium-based systems. This collaboration supports IonQ’s goal of providing scalable, data center-ready quantum systems, while using NKT’s fiber laser technology to deliver modular, high-performance, and cost-effective solutions.
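
On the fractional-gates item above: the gate-count saving is easy to see by transpiling a single two-qubit rotation into the standard CX-based basis, as in this illustrative Qiskit snippet. How fractional gates are enabled on a specific IBM backend is a runtime option not shown here.

```python
# Why a native two-qubit rotation saves depth: one RZZ(theta), which runs as a
# single gate on hardware supporting fractional gates, versus its standard
# decomposition into the CX + single-qubit basis. Illustrative only.
from math import pi
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2)
qc.rzz(pi / 7, 0, 1)        # one parameterized two-qubit rotation

decomposed = transpile(qc, basis_gates=["cx", "rz", "sx", "x"])
print(decomposed.count_ops())   # e.g. {'cx': 2, 'rz': 1}: three gates instead of one
```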

QUANTUM MEDIA

LISTEN

In the most recent episode of the 632-nanometer podcast, Peter Zoller and Ignacio Cirac discuss the evolution of quantum computing, from early developments in trapped ion technology to challenges in error correction and fault tolerance. They share insights into the commercialization of quantum computing, its applications, and offer guidance for young scientists entering the field.

WATCH

Quantum and AI talks on current research at TU Delft.

THAT’S A WRAP.