AI researcher focused on trustworthy and safety-aligned agentic systems, with peer-reviewed publications spanning AI architectures, governance, and real-world deployment contexts. Experienced in designing constrained agent pipelines, analyzing failure modes, and studying robustness in high-stakes environments. Preparing for graduate research in reliable machine learning systems.
My research explores the design of trustworthy agentic AI systems capable of autonomously managing complex digital infrastructure while maintaining transparency, accountability, and governance. I investigate architectures that combine agent-based intelligence, blockchain monitoring, and digital twins to enable secure and explainable AI-driven decision-making, particularly in critical domains such as smart cities, healthcare, and renewable energy infrastructure.
A central concern of my work is the alignment problem in deployed agentic systems: how can we build autonomous pipelines that remain safe and governable when exposed to adversarial inputs, distributional shift, or misaligned incentives? I approach this through the intersection of safety engineering, formal governance frameworks, and empirical failure-mode analysis.
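To make the idea of a constrained, auditable agent pipeline concrete, here is a minimal illustrative sketch (not a description of any specific system from my work): a governance layer checks each proposed agent action against an allowlist before execution, and every decision is appended to a hash-chained log, a toy stand-in for the kind of tamper-evident monitoring that blockchain-based approaches provide. All names (`ALLOWED_ACTIONS`, `AuditLog`, `governed_step`) are hypothetical.

```python
import hashlib
import json

# Hypothetical allowlist of actions the agent may execute.
ALLOWED_ACTIONS = {"read_sensor", "adjust_setpoint"}

class AuditLog:
    """Append-only, hash-chained log: each entry's hash covers the
    previous entry's hash, so tampering breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, record):
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": self._prev, "hash": digest})
        self._prev = digest

def governed_step(proposed_action, log):
    """Gate a proposed agent action through the allowlist and record
    the decision before anything is executed."""
    approved = proposed_action in ALLOWED_ACTIONS
    log.append({"action": proposed_action, "approved": approved})
    return approved

log = AuditLog()
governed_step("adjust_setpoint", log)  # permitted action, approved
governed_step("delete_records", log)   # outside the allowlist, rejected
```

Real deployments face the harder versions of each piece: the allowlist becomes a formal policy, the log becomes distributed and tamper-evident, and the interesting failure modes arise when adversarial inputs or distributional shift push proposed actions outside the behavior the policy anticipated.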