Welcome to the Q4C (QForce) Series

Queens For Computing
Queens College CUNY Computer Science Colloquium


This colloquium is intended to bring together Computer Science and Data Science researchers in the tri-state area (especially in NYC) and to foster collaboration. We welcome talks on any topic of interest to the CS community, including theory, algorithms, machine learning, and data science. If you are interested in attending in person or online, or would like to give a talk, please contact the organizers.


  1. Monday, 02/09/2026, 12:15PM - 1:30PM
    Room: Science Building, C205
    Speaker: Daniel Waxman, Basis AI

    Title: Online Bayesian Learning and Ensembles

    Abstract: Many real-world applications of machine learning require continuous, adaptive learning strategies over the course of deployment. We discuss a unified framework for online and sequential inference and ensembling of Bayesian models. We focus in particular on Gaussian processes, a family of flexible nonparametric models, showing how to construct general streaming estimators and how to adapt them to decentralized, federated, and robust learning. Finally, we discuss the fragility of the typical online ensembling method, Bayesian model averaging, and introduce a principled alternative from optimization theory: online Bayesian stacking.
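
    The standard online ensembling method the abstract critiques, Bayesian model averaging, multiplies each model's weight by its predictive likelihood for the newest observation and renormalizes. A minimal log-space sketch (the function name and interface are illustrative, not from the talk):

    ```python
    import numpy as np

    def bma_update(log_weights, log_pred_liks):
        """One online Bayesian model averaging step, in log space for
        numerical stability: each model's weight is multiplied by its
        predictive likelihood for the newest observation, then the
        weights are renormalized to sum to one."""
        log_w = log_weights + log_pred_liks
        return log_w - np.logaddexp.reduce(log_w)

    # Two models start equally weighted; model 0 explains the new
    # observation much better, so its weight grows.
    log_w = bma_update(np.log([0.5, 0.5]), np.log([0.9, 0.1]))
    weights = np.exp(log_w)
    ```

    The multiplicative update is what makes BMA fragile in the streaming setting: under model misspecification the weights collapse onto a single model exponentially fast, which motivates alternatives such as the stacking approach the talk introduces.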

  2. Monday, 03/02/2026, 12:15PM - 1:30PM
    Room: Science Building, C205
    Speaker: Yingcong Li, New Jersey Institute of Technology

    Title: Understanding Memory and Reasoning in Language Models via Markov Processes

    Abstract: Despite their remarkable empirical success, language models remain poorly understood from a principled machine learning perspective. To this end, in this talk, we present a unified Markovian perspective on memory and reasoning in modern language models. We show that the attention mechanism can be formally interpreted as a context-conditioned Markov process, enabling a principled analysis of learning dynamics. Under this view, model memory corresponds to a Markov transition matrix, and incorporating new knowledge can be understood as expanding the state space. This perspective motivates embedding-level update methods for continual learning that achieve sample-efficient knowledge integration with zero catastrophic forgetting. Furthermore, by formulating multi-step reasoning as a Markov process, we analyze reasoning in small language models and explain why standard supervised fine-tuning and reinforcement learning can fail under sparse rewards. Together, these results demonstrate that Markov processes provide a unifying lens for understanding and improving memory, reasoning, and learning in language models.

  3. Wednesday, 03/25/2026, 12:15PM - 1:30PM
    Room: Science Building, C205
    Speaker: Sai Zhang, NYU

    Title: Leveraging Natural Human Behavior for Efficient and Intelligent AR and VR Systems

    Abstract: Augmented and virtual reality (AR/VR) systems are emerging as a critical computing platform in modern life, with increasing impact across fields such as education, healthcare, and industrial applications. Despite their growing importance, AR and VR devices operate under strict constraints on latency, energy consumption, and computational resources, making efficient system design a fundamental challenge. A defining characteristic that distinguishes AR and VR from conventional edge devices is their direct and continuous interface with the human user, where perception, attention, and intention fundamentally shape system behavior. By leveraging natural human behavior such as gaze, head motion, and hand interaction as first-class signals, AR and VR systems can adapt their computation to what truly matters to the user, enabling selective processing and more efficient use of limited resources.

    In this talk, I will present recent progress from my group on efficient AR and VR computing, spanning a broad range of applications including AI, graphics, and tracking, as well as the corresponding hardware and system designs that enable these solutions to be implemented efficiently. Together, these efforts illustrate how human-centered system design can unlock new opportunities for building efficient, intelligent, and responsive AR and VR platforms.

  4. Wednesday, 04/15/2026, 12:15PM - 1:30PM
    Room: Science Building, C205
    Speaker: David Persson, Flatiron & NYU

    Title: The Polar Express: Optimal matrix sign methods and their application to the Muon algorithm

    Abstract: Computing the polar decomposition and the related matrix sign function has been a well-studied problem in numerical analysis for decades. Recently, it has emerged as an important subroutine within the Muon algorithm for training deep neural networks. However, the requirements of this application differ sharply from classical settings: deep learning demands GPU-friendly algorithms that prioritize high throughput over high precision. We introduce Polar Express, a new method for computing the polar decomposition. Like Newton-Schulz and other classical polynomial methods, our approach uses only matrix-matrix multiplications, making it very efficient on GPUs. Inspired by earlier work of Chen & Chow and Nakatsukasa & Freund, Polar Express adapts the update rule at each iteration by solving a minimax optimization problem. We prove that this strategy minimizes error in a worst-case sense, allowing Polar Express to converge as rapidly as possible both in the early iterations and asymptotically. We also address finite-precision issues, making it practical to use in bfloat16. When integrated into the Muon training framework, our method leads to consistent improvements in validation loss when training a GPT-2 model on one billion tokens from the FineWeb dataset, outperforming recent alternatives across a range of learning rates.
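
    The Newton-Schulz iteration mentioned in the abstract computes the orthogonal polar factor using only matrix-matrix products, which is what makes this family of methods GPU-friendly. A minimal NumPy sketch of the classical fixed-coefficient variant (not the adaptive Polar Express coefficients, which the talk derives via a minimax problem):

    ```python
    import numpy as np

    def newton_schulz_polar(A, iters=30):
        """Classical Newton-Schulz iteration for the orthogonal polar
        factor U of A = U @ H. Each step applies the cubic polynomial
        X <- 1.5*X - 0.5*X@X.T@X, using only matrix multiplications.
        Normalizing by the Frobenius norm puts every singular value
        in (0, 1], where the iteration converges to 1."""
        X = A / np.linalg.norm(A)
        for _ in range(iters):
            X = 1.5 * X - 0.5 * X @ X.T @ X
        return X

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    U = newton_schulz_polar(A)   # approximately orthogonal
    H = U.T @ A                  # approximately symmetric, A = U @ H
    ```

    The fixed coefficients (1.5, -0.5) are what Polar Express replaces: choosing new polynomial coefficients at every iteration to minimize worst-case error speeds up both the early iterations and the asymptotic convergence.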

  5. Monday, 04/27/2026, 12:15PM - 1:30PM
    Room: Science Building, C205
    Speaker: Lu Wang, Stevens Institute of Technology

    Title: Informal Caregivers' Mental Models of Generative Artificial Intelligence-Based Conversational Agents for Problem-Solving

    Abstract: People increasingly use artificial intelligence (AI), including Generative AI-based Conversational Agents (GCAs), to solve daily problems. However, users face challenges in understanding GCAs' capabilities, learning to interact with them, and evaluating their outputs. Designing around mental models of GCAs could help address such challenges. We interviewed sixteen informal caregivers of family and friends and asked them to use GCAs on various problem-solving tasks. We identified mental models of GCAs as (1) a search engine, (2) a generative AI tool, (3) a personal assistant, and (4) a conversational partner. Notably, some participants shifted and modified their mental models of GCAs even during the interactions. We present how differences in mental models could influence participants' evaluations of GCAs' performance. Our findings contribute to understanding the applications of GCAs for daily problem-solving and reveal the design tensions introduced by different mental models. We discuss the implications for the safe and effective use of GCAs.


The seminar is organized by Jun Li
Email Contact: jun.li@qc.cuny.edu