Keywords

Large Language Models, LLM Optimization, Transformer Architecture, Pre-training Techniques, Fine-tuning Methods, Multimodal AI, Natural Language Processing, AI Code Generation, Healthcare AI Applications, Drug Discovery AI, AI Bias and Fairness, Ethical AI Systems, AI Security and Privacy, Real-time AI Inference, Scalable AI Models, Open Source AI Models, Proprietary AI Systems, Energy Efficient AI, AI Model Deployment, Next Generation Artificial Intelligence

Optimizing and Managing Large Language Models

Authored by: Meenu Gupta, Awanit Kumar, Nirmal Singh & Rakesh Kumar
ISBN: 9789372190243 | Binding: Hardcover | Pages: 364 | Language: English | Copyright: 2026
Length: 22.9 cm | Breadth: 15.2 cm | Height: 2.89 cm | Imprint: NIPA | Weight: GMS
USD 150.00 | USD 135.00
 
This book will be available from 08-Jul-2026

Optimizing and Managing Large Language Models: Foundations, Applications, and Future Directions provides a comprehensive exploration of large language models (LLMs), their architectures, optimization techniques, and real-world applications. The book examines transformer models, retrieval-augmented generation, multimodal learning, and scalability challenges, along with solutions such as model compression and federated learning.

It highlights interdisciplinary applications in healthcare, software engineering, and scientific research while addressing ethical concerns like bias, fairness, and privacy. Covering both theoretical foundations and practical implementations, this volume serves as an essential resource for researchers, professionals, and students, offering insights into the future of intelligent, autonomous AI systems.

Dr. Meenu Gupta is a Professor and Head of Conferences & Research Outreach at the UIE-CSE Department, Chandigarh University, India. She completed her Ph.D. in Computer Science and Engineering with an emphasis on Traffic Accident Severity Problems from Ansal University, Gurgaon, India, in 2020.

She has more than 15 years of teaching experience. Her research areas cover Machine Learning, Intelligent Systems, and Data Mining, with a specific interest in Artificial Intelligence, Image Processing and Analysis, Smart Cities, Data Analysis, and Human/Brain-Machine Interaction (BMI). She has edited five books and authored four. She has also authored or co-authored more than 20 book chapters and over 80 papers in refereed international journals and conferences. She has filed five patents and was awarded Best Faculty and Department Researcher in 2021 and 2022.

Dr. Awanit Kumar completed his Ph.D. in Computer Science and Engineering from Madhav University of Engineering and Technology, Rajasthan. He also holds an integrated dual B.Tech and M.Tech degree in Computer Science and Engineering from Mewar University, Rajasthan. He is currently working as an Assistant Professor and Ph.D. Coordinator in the Department of Computer Science & Engineering at Sangam University, Bhilwara, Rajasthan.

He has approximately 9.5 years of combined industry and teaching experience. His research interests include Machine Learning, Ensemble Modeling, Artificial Intelligence, Generative AI, Internet of Things, Wireless Sensor Networks, and Ad-hoc Networks. He is an active reviewer for several reputed SCI-indexed journals, including Informatics in Medicine Unlocked (Elsevier). He has more than 30 publications in SCI-indexed and Scopus journals as well as international conferences.

Dr. Nirmal Singh completed his Ph.D. from Sangam University and is currently working as an Assistant Professor in its Department of Computer Science & Engineering. He completed his B.Tech in Computer Science & Engineering from JSIMT, Shikohabad, Uttar Pradesh, and his M.Tech from SIMT (now known as Sanskriti University).

He has 12 years of teaching experience in reputed government organizations such as MNNIT Allahabad and BIET Jhansi. He has more than 10 patents and 10 Scopus-indexed papers. He has edited 2 books and contributed to 5 book chapters. He has also served as convenor for various international conferences at Sangam University. He has received various research incentives from Sangam University as well as the Government of Rajasthan Saras Dairy.

Dr. Rakesh Kumar is an Associate Director at the UIE-CSE Department, Chandigarh University, Punjab, India. He is pursuing a Post-Doctoral Fellowship at MIR Lab, USA. He completed his Ph.D. in Computer Science and Engineering from Punjab Technical University, Jalandhar, in 2017.

He has more than 20 years of teaching experience. His research interests include IoT, Machine Learning, and Natural Language Processing. He has edited more than 12 books with reputed publishers such as Elsevier, Springer, and Taylor & Francis, and has authored 5 books. He serves as a reviewer for several journals, including Big Data, CMC, Scientific Reports, TSP, Multimedia Tools and Applications, and IEEE Access. He is a Senior Member of IEEE. He has authored or co-authored more than 200 publications in national and international conferences and journals. He has also served as an organizer and editor for many international conferences under the aegis of IEEE and AIP.

Chapter 1. Introduction to Large Language Models: Evolution and Breakthroughs
Chapter 2. Foundational Architectures: Transformer Models and Beyond
Chapter 3. Scaling Laws and Optimization Strategies in LLMs
Chapter 4. Pre-training and Fine-tuning Techniques, Challenges, and Innovations
Chapter 5. Multimodal LLMs: Bridging Text, Vision, and Speech Processing
Chapter 6. Large Language Models for Software Engineering and Code Generation
Chapter 7. LLMs in Healthcare: Diagnosis, Drug Discovery, and Personalized Medicine
Chapter 8. Bias, Fairness, and Ethical Considerations in LLM Deployment
Chapter 9. Security and Privacy Challenges in Large Language Models
Chapter 10. Real-time Inference and Deployment of LLMs at Scale
Chapter 11. Open Source vs. Proprietary Models: A Comparative Analysis
Chapter 12. Energy Efficiency and Sustainability in Large-Scale Language Models
Chapter 13. Governance, Optimization, and Life Cycle Management of Large Language Models
Chapter 14. Accelerating LLM Inference: Hardware-Software Co-Design for Real-Time Applications
Chapter 15. Enhancing Security in AI-Driven Code Generation: A Framework for Safe Integration of Large Language Models
Chapter 16. Conclusion and Roadmap for Next-generation AI Models

 