Janardhan Rao (Jana) Doppa
Huie-Rogers Endowed Chair in Computer Science
Berry Distinguished Professor in Engineering
Chair, EECS Graduate Studies
Associate Professor, School of EECS, Washington State University
Office: EME 133
Office Hours: Mon 4-5pm for Fall 2024
Voice: 1-509-335-1846 (Email is the preferred option)
Email: jana.doppa [AT] wsu.edu
My general research interests are in the broad field of artificial intelligence (AI), where I mainly focus on the sub-fields of machine learning and data-driven science and engineering. The current focus of my work is:
AI-driven Adaptive Experiment Design with applications to engineering and scientific domains
Sequential Decision-making under Uncertainty problems motivated by real-world applications including agriculture and optimization of cyber-physical systems such as smart grid and smart health
Robust Machine Learning and Decision-making for high-stakes applications
Use-inspired AI research to enable high-impact applications in domains including agriculture, 3D printing, and healthcare
Machine Learning to improve Electronic Design Automation for designing high-performance, energy-efficient, and reliable hardware for large-scale data analysis applications
Optimized Computer Architectures for Big Data Computing using Emerging Technologies (e.g., Through-Silicon-Via / Monolithic 3D integration, heterogeneous systems, and Processing-in-Memory cores)
Machine Learning for Sustainable Computing and Computational Sustainability
Note for Prospective Students: I'm always looking for strong, self-motivated, and ambitious PhD students. You can find more details here.
For undergrad students at WSU: If you are interested in working with me for research experience and/or an honors thesis, please take my data mining class (CptS 315) and we can discuss the details along the way. Please read this article for some useful advice.
Undergrad / MS Non-Thesis Advising Meetings: Please drop by my office hours. If my office hours don't work for you, please send me an email for an appointment.
I like to work on artificial intelligence and machine learning problems motivated by important real-world applications. A sample of my current and recent research projects includes:
AI to Accelerate Science and Engineering
How can we develop AI methods by combining valuable domain knowledge and data to accelerate scientific discovery and engineering design?
AI for nanoporous materials (NPMs) design and chemical sciences with our chemistry collaborator Cory Simon @ Oregon State. We are looking at sustainability applications of NPMs such as (gas storage) densifying hydrogen - a clean fuel - for compact storage onboard vehicles; (gas separation) capturing carbon dioxide from the flue gas of coal-fired power plants and subsequently sequestering it to prevent global warming; and (gas sensing) detecting toxic compounds and explosives. See our MSDE-2021 and Digital Discovery-2023 papers for some initial results.
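To make the adaptive experiment design idea concrete, here is a minimal, self-contained sketch of a Bayesian optimization loop over a finite pool of candidate materials, assuming a Gaussian process surrogate and an expected improvement acquisition function. The descriptors and the simulate_property() oracle are hypothetical stand-ins for expensive simulations or lab experiments, not the actual models from the papers above.

```python
# Minimal sketch of an adaptive experiment design loop (Bayesian optimization)
# over a finite pool of candidate materials. The candidate descriptors and the
# simulate_property() oracle are hypothetical stand-ins for expensive
# simulations or lab experiments.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
pool = rng.uniform(0, 1, size=(500, 4))           # candidate material descriptors

def simulate_property(x):
    """Hypothetical expensive evaluation (e.g., simulated gas uptake)."""
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

# Seed with a few random experiments.
idx = list(rng.choice(len(pool), size=5, replace=False))
X, y = pool[idx], np.array([simulate_property(x) for x in pool[idx]])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                               # experiment budget
    gp.fit(X, y)
    mu, sigma = gp.predict(pool, return_std=True)
    best = y.max()
    # Expected improvement acquisition: balance exploration and exploitation.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    ei[idx] = -np.inf                             # don't repeat experiments
    nxt = int(np.argmax(ei))
    idx.append(nxt)
    X = np.vstack([X, pool[nxt]])
    y = np.append(y, simulate_property(pool[nxt]))

print("best property value found:", y.max())
```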
AI for agricultural decision support to improve water management, farm management, and harvest management by overcoming labor shortages. Excited to be part of the newly funded NSF-USDA AI Institute for Agriculture Decision Support and to lead the AI Thrust on modeling systems of knowns and unknowns.
Machine Learning meets Computing Systems Design: ML for Systems and Systems for ML
How can we exploit the synergies between machine learning and computing systems to enable the design of high-performance, energy-efficient, and reliable computing systems spanning from edge devices to servers to cloud, which will empower further advances in ML?
Machine learning methodologies for design space exploration and optimization of manycore systems to significantly reduce the overall engineering cost and design time of application-specific hardware. We developed a theory-guided ML framework by combining the domain knowledge from manycore designers and training data from hardware simulations (see IEEE TCAD 2017, IEEE TC 2019, and ACM TECS 2019 papers).
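As a rough illustration (not the specific framework from the cited papers), the sketch below shows the general pattern of surrogate-assisted design space exploration: fit a cheap predictive model on a few expensive simulations, prune candidates with designer-supplied domain-knowledge rules, and only simulate the most promising configurations. The design knobs, the simulate() cost model, and the constraint are all hypothetical.

```python
# Illustrative sketch of surrogate-assisted design space exploration: learn a
# cheap predictive model from a handful of expensive hardware simulations,
# prune candidates with a designer-supplied (domain-knowledge) rule, and only
# simulate the most promising configurations. All knobs are hypothetical.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical manycore design knobs: (core count, NoC link width, cache MB)
design_space = np.array(list(itertools.product([16, 32, 64],
                                               [64, 128, 256],
                                               [2, 4, 8])), dtype=float)

def simulate(cfg):
    """Stand-in for a slow cycle-accurate simulation returning energy-delay."""
    cores, width, cache = cfg
    return 1e3 / (cores * np.log2(width)) + 0.5 * cache

def satisfies_domain_knowledge(cfg):
    """Designer-provided rule, e.g., wide links require at least 32 cores."""
    cores, width, _ = cfg
    return not (width >= 256 and cores < 32)

rng = np.random.default_rng(1)
train_idx = rng.choice(len(design_space), size=8, replace=False)
X_train = design_space[train_idx]
y_train = np.array([simulate(c) for c in X_train])

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

candidates = np.array([c for c in design_space if satisfies_domain_knowledge(c)])
pred = surrogate.predict(candidates)
top = candidates[np.argsort(pred)[:3]]            # shortlist for real simulation
best_cfg = min(top, key=simulate)
print("selected configuration (cores, link width, cache MB):", best_cfg)
```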
Machine learning for runtime resource management to save power while maintaining performance and temperature constraints in systems-on-chip (SoCs) ranging from mobile to large-scale manycore systems. We developed a theory-guided ML framework that uses learned models of design objectives (e.g., power, performance, and temperature) as a function of the system state to provide strong supervised training data to continuously improve the resource management policy (see IEEE TVLSI 2017, IEEE TVLSI 2019, ACM TODAES 2020, and DAC-2021 papers).
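The sketch below illustrates, in a heavily simplified form, the idea of using learned objective models as an oracle that labels good actions and supervises a resource-management policy in an imitation-learning-style loop. The state features, objective models, and DVFS actions are hypothetical placeholders, not the actual framework from the cited papers.

```python
# Minimal sketch: learned models of power and performance act as an oracle
# that labels the best action (e.g., a DVFS level) for sampled system states;
# the policy classifier is retrained on the aggregated labeled data. All
# models and state features are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
ACTIONS = [0, 1, 2]                    # e.g., low / medium / high DVFS level

def predicted_power(state, a):         # learned objective model (stand-in)
    return 0.5 * a + state[0]

def predicted_perf(state, a):          # learned objective model (stand-in)
    return a * state[1]

def oracle_action(state, perf_target=0.8):
    """Pick the lowest-power action that still meets the performance target."""
    feasible = [a for a in ACTIONS if predicted_perf(state, a) >= perf_target]
    pool = feasible if feasible else ACTIONS
    return min(pool, key=lambda a: predicted_power(state, a))

states, labels = [], []
policy = None
for _ in range(5):                      # rounds of data collection and retraining
    for _ in range(200):                # sampled system states (stand-in for runtime)
        s = rng.uniform(0, 1, size=2)   # hypothetical (utilization, workload) state
        states.append(s)
        labels.append(oracle_action(s))
    policy = LogisticRegression(max_iter=1000).fit(np.array(states), np.array(labels))

print("policy chooses DVFS level", policy.predict([[0.2, 0.9]])[0])
```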
How can we leverage emerging technologies and machine learning to design high-performance and energy-efficient manycore systems for Big-Data workloads? For example, 3D integration is a breakthrough technology to achieve "More Moore and More Than Moore," and provides numerous benefits (better performance, lower power consumption, and higher bandwidth) by utilizing vertical interconnects and 3D stacking; processing-in-memory cores enabled by ReRAM will reduce data movement; and integrating heterogeneous processing cores of different types and functional granularity allows us to bridge the energy efficiency of application-specific hardware with the programmability of general-purpose processors. See IEEE TCAD 2017 for TSV-based 3D manycore systems; ICCD-2017 for benefits of M3D in homogeneous manycore systems; ACM TODAES 2018 for improved thermal and performance trade-offs in M3D-based manycore systems; ACM TODAES 2019 for overcoming the challenges due to electrostatic coupling in M3D-based manycore systems; IEEE TC 2018 for integrating heterogeneous cores including CPUs and GPUs with conflicting QoS requirements on the same 2D SoC; IEEE TC 2019 for benefits of 3D heterogeneous manycore systems; ACM JETC 2020 for the benefits of partially connected 3D manycore systems using near-field inductive coupling based interconnect; and ACM TODAES 2021 for benefits of M3D-based heterogeneous manycore systems including GPU cores.
How can we design and optimize high-performance, energy-efficient, and reliable manycore systems for training a large class of deep neural networks? See ACM JETC 2018 for a 3D manycore architecture with ReRAM-based processing-in-memory cores; IEEE TCAD 2021 for training CNNs by synergistically combining the benefits of M3D integration, ReRAM, and GPUs; IEEE TCAD 2021 for improving the 3D ReRAM/GPU architecture by optimizing the normalization layers using BO; ACM TECS 2021 for reliable training of CNNs on unreliable ReRAM-based manycore accelerators; and DATE-2021, IEEE TVLSI 2021, and ICCAD-2021 papers on processing-in-memory-based manycore architectures for high-performance training of graph neural networks.
How can we create energy-efficient and secure edge AI? We have developed a general hardware and software co-design framework that is applicable to problems ranging from simple classification to structured output prediction (e.g., 3D object shapes from 2D images in AR/VR applications). See IEEE TCAD 2018, ACM TECS 2019, DAC 2020, ICCAD 2020, and ICCAD 2021 papers for different instantiations. See the IEEE TCAD 2020 paper for analyzing and improving the security of edge AI applications such as smart homes, mobile health, and smart grid.
Structured Prediction: Algorithms and Applications
How can we learn to predict structured outputs (e.g., sequences, trees, and graphs)? Structured prediction tasks arise in a variety of domains including natural language processing (e.g., POS tagging, dependency parsing, coreference resolution) and computer vision (e.g., object detection, semantic segmentation).
The HC-Search framework (see JAIR-2014 paper) unifies the cost function and control knowledge learning frameworks for structured prediction. The effectiveness of this framework depends on the quality of the search space (the depth at which target outputs can be located). We have designed the limited discrepancy search (LDS) space, which is parameterized by a greedy recurrent classifier or policy, and also its sparse variant to improve efficiency (see JMLR-2014 paper). We have also designed the randomized segmentation space for computer vision tasks, where we probabilistically sample likely object configurations in the image from a hierarchical segmentation tree (see Michael's CVPR-2015 paper).
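For readers unfamiliar with HC-Search, here is a minimal sketch of its inference procedure: a learned heuristic H guides a search over candidate outputs, and a separately learned cost function C selects the best output among everything the search visited. The successor function, scorers, and toy example are generic placeholders rather than the learners from the cited papers.

```python
# Minimal sketch of HC-Search inference: a heuristic H guides a greedy search
# over candidate structured outputs, and a cost function C picks the best
# output among everything the search uncovered. Placeholders only.
def hc_search_inference(x, initial_output, successors, H, C, budget=50):
    """x: input; initial_output: starting structured output;
    successors(x, y): candidate modifications of y;
    H(x, y): heuristic score (lower = more promising to expand);
    C(x, y): cost of returning y (lower = better final output)."""
    current = initial_output
    visited = [current]
    for _ in range(budget):
        children = list(successors(x, current))
        if not children:
            break
        current = min(children, key=lambda y: H(x, y))   # heuristic-guided step
        visited.append(current)
    return min(visited, key=lambda y: C(x, y))           # cost-function selection

if __name__ == "__main__":
    # Toy usage: outputs are binary tag sequences; flipping one position
    # yields the neighbors of an output.
    gold = (1, 0, 1, 1)
    succ = lambda x, y: [y[:i] + (1 - y[i],) + y[i + 1:] for i in range(len(y))]
    H = lambda x, y: sum(a != b for a, b in zip(y, gold)) + 0.3 * y[0]  # imperfect heuristic
    C = lambda x, y: sum(a != b for a, b in zip(y, gold))               # better cost function
    print(hc_search_inference(None, (0, 0, 0, 0), succ, H, C))
```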
The easy-first framework learns to make easy prediction decisions first to constrain the harder decisions, akin to constraint satisfaction algorithms. We have developed a principled optimization-based learning approach for the easy-first framework (see Jun's AAAI-2015 paper).
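A minimal sketch of easy-first inference is shown below: among all pending decisions, commit to the one the model is most confident about, then re-score the remaining decisions given that partial solution. The scoring function in the toy example is a hypothetical stand-in for a learned model.

```python
# Illustrative sketch of easy-first inference: commit to the easiest
# (highest-confidence) pending decision, then let that choice constrain the
# rest. The scorer below is a hypothetical stand-in for a learned model.
def easy_first(items, score):
    """items: decisions to make; score(item, decided) -> (confidence, label)."""
    decided = {}
    pending = list(items)
    while pending:
        best = max(pending, key=lambda i: score(i, decided)[0])  # easiest first
        decided[best] = score(best, decided)[1]
        pending.remove(best)
    return decided

if __name__ == "__main__":
    # Toy sequence labeling: position 0 is easy on its own; the others become
    # easy only once their left neighbor has been decided (they copy its label).
    def score(i, decided):
        if i == 0:
            return (0.9, "B")
        if i - 1 in decided:
            return (0.8, decided[i - 1])
        return (0.1, "B")
    print(easy_first(range(4), score))
```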
We have solved a variety of structured prediction applications including multi-label prediction (see AAAI-2014 paper); coreference resolution within a document (see Chao Ma's EMNLP-2014 paper); joint entity and event coreference across documents (see Jun's AAAI-2015 paper); object detection in challenging biological images, semantic segmentation of images, and monocular depth estimation from images (see Michael's ICCV-2013 workshop paper and CVPR-2015 paper); and activity prediction from sensor data (see Bryan's KDD-2015 paper).
Other researchers have employed HC-Search to solve various structured prediction applications: dependency parsing (see IJCAI-2016 and ACL-2016 papers) and predicting DDoS attacks (see MLJ-2016 paper).
Deep Language Understanding (with Chao Ma, Jun Xie, Shahed Sorower, Walker Orr, Prashanth Mannem, Tom Dietterich, Xiaoli Fern, and Prasad Tadepalli)
How can we build intelligent computer systems that can achieve deep language understanding? In the Deep Reading and Learning project, we are trying to learn a high-level representation called event graphs (a form of Abstract Meaning Representation) from raw text. Towards this goal, we are working on several sub-problems: 1) entity co-reference resolution within a document; 2) joint entity and event co-reference resolution across documents; 3) joint models for entity linking and discovery; and 4) learning general scripts of events. See our AAAI-2014 paper on script learning, EMNLP-2014 paper on co-reference resolution, and AAAI-2015 paper on learning for the easy-first framework. [Funded by DARPA as part of the DEFT program]
Machine Reading (with Shahed Sorower, Mohammad NasrEsfahani, Tom Dietterich, Xiaoli Fern, and Prasad Tadepalli)
How can we learn relational world knowledge rules (e.g., Horn clauses) from natural texts to support textual inference? Natural texts are radically incomplete (writers don't mention redundant information) and systematically biased (writers mention exceptions to prevent readers from making incorrect inferences), which makes rule learning very hard. We solve this problem by modeling the pragmatic relationship between what rules exist and what things will be mentioned (e.g., Gricean maxims). We worked with BBN and other researchers from CMU, University of Washington, and ISI. See our NIPS-2011 and ACML-2011 papers for details. [Funded by DARPA as part of the Machine Reading program]
Integrated Learning (with Tom Dietterich and Prasad Tadepalli)
How can we integrate information from multiple sources to learn better? In the past, we worked on DARPA's Integrated Learning project, where the goal was to learn a complex problem-solving task from a single demonstration by an expert. We learned the cost function that the expert is minimizing while producing the demonstration by formulating it as an inverse optimization problem. Our component's name was DTLR (Decision Theoretic Learner and Reasoner). We worked with other researchers from Lockheed-Martin, ASU, RPI, UMD, UMASS, UIUC, and Georgia Tech. See our TIST-2012 paper for details. [Funded by DARPA]
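To illustrate the inverse-optimization idea (this is not the DTLR component itself), the sketch below adjusts cost weights until the demonstrated solution is no more costly than any known alternative solution, in the spirit of a structured-perceptron update; the plan features and alternative plans are hypothetical.

```python
# Minimal sketch of learning a cost function from a demonstration via inverse
# optimization: adjust non-negative weights w until the expert's demonstrated
# solution has the lowest cost w . features among the known alternatives.
import numpy as np

def learn_cost_weights(demo_features, alternative_features, epochs=100, lr=0.1):
    demo = np.asarray(demo_features, dtype=float)
    alts = np.asarray(alternative_features, dtype=float)
    w = np.ones(len(demo))
    for _ in range(epochs):
        cheapest = alts[np.argmin(alts @ w)]      # alternative that currently looks best
        if demo @ w <= cheapest @ w:              # demonstration already optimal under w
            break
        w -= lr * (demo - cheapest)               # make the demonstration look cheaper
        w = np.maximum(w, 0.0)                    # keep feature costs non-negative
    return w

if __name__ == "__main__":
    # Hypothetical plan features: (number of steps, resource usage, risk).
    demo = [4, 6.0, 0.1]                          # the expert's demonstrated plan
    alternatives = [[3, 5.0, 0.9], [6, 1.0, 0.2], [5, 3.0, 0.5]]
    print("learned cost weights:", learn_cost_weights(demo, alternatives))
```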
Publications
Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes
Syrine Belakaria, Ben Letham, Janardhan Rao Doppa, Barbara Engelhardt, Stefano Ermon, and Eytan Bakshy
To appear in Thirty-Eighth Conference on Neural Information Processing Systems (NeurIPS), 2024
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2022 (Best Paper Award @ ACM/IEEE Embedded Systems Week Conference)
Select-and-Evaluate: A Learning Framework for Large-Scale Knowledge Graph Search
F A Rezaur Rahman Chowdhury*, Chao Ma*, Md Rakibul Islam, Mohammad Hossein Namaki, Mohammad Omar Faruk, and Janardhan Rao Doppa (* denotes equal contribution)
Proceedings of Machine Learning Research (PMLR), Vol 77, pp 129-144, 2017
An Ensemble Architecture for Learning Complex Problem Solving Techniques from Demonstration
Xiaoqin Zhang, Bhavesh Shrestha, Sung Wook Yoon, Subbarao
Kambhampati, Phillip DiBona, Jinhong K. Guo, Daniel McFarlane, Martin
O. Hofmann, Kenneth R. Whitebread, Darren Scott Appling, Elizabeth T.
Whitaker, Ethan Trewhitt, Li Ding, James Michaelis, Deborah L.
McGuinness, James A. Hendler, Janardhan Rao Doppa, Charles Parker,
Thomas G. Dietterich, Prasad Tadepalli, Weng-Keen Wong, Derek T. Green,
Antons Rebguns, Diana F. Spears, Ugur Kuter, Geoffrey Levine, Gerald
DeJong, Reid MacTavish, Santiago Ontanon, Jainarayan Radhakrishnan,
Ashwin Ram, Hala Mostafa, Huzaifa Zafar, Chongjie Zhang, Daniel D.
Corkill, Victor R. Lesser, and Zhexuan Song
ACM Transactions on Intelligent Systems and Technology, vol 3, issue 4, article 75, pp 1-38 (TIST-2012)
An Ensemble Learning and Problem Solving Architecture for Airspace Management
Xiaoqin Zhang, Sung Wook Yoon, Phillip
DiBona, Darren Scott Appling, Li Ding, Janardhan Rao Doppa, Derek T.
Green, Jinhong K. Guo, Ugur Kuter, Geoffrey Levine, Reid MacTavish,
Daniel McFarlane, James Michaelis, Hala Mostafa, Santiago Ontanon,
Charles Parker, Jainarayan Radhakrishnan, Antons Rebguns, Bhavesh
Shrestha, Zhexuan Song, Ethan Trewhitt, Huzaifa Zafar, Chongjie Zhang,
Daniel D. Corkill, Gerald DeJong, Thomas G. Dietterich, Subbarao
Kambhampati, Victor R. Lesser, Deborah L. McGuinness, Ashwin Ram, Diana
F. Spears, Prasad Tadepalli, Elizabeth T. Whitaker, Weng-Keen Wong,
James A. Hendler, Martin O. Hofmann, and Kenneth R. Whitebread
Proceedings of AAAI Conference on Innovative Applications of Artificial Intelligence (IAAI), 2009
CptS 570 Machine Learning (Fall 2014; Fall 2015; Fall 2016; Fall 2017; Fall 2018; Fall 2019; Fall 2020; Fall 2021; Fall 2022; Fall 2023)
CptS 577 Structured Prediction and Intelligent Decision-Making (Spring 2015; Spring 2016; Spring 2017; Spring 2018; Spring 2019) -- strong focus on the Electronic Design Automation domain in addition to NLP and Vision
CptS 315 Introduction to Data Mining (Spring 2018; Spring 2019; Spring 2020; Spring 2021; Spring 2022; Spring 2023)
In the past, I was the instructor for the following courses:
CS/ECE 507: Introduction to Graduate School (aka Research Methods in Computer Science, partly based on Prof. Tom Dietterich's Fall 1996 course), Oregon State University. (Fall 2010, Fall 2011, Fall 2012, Fall 2013)
CS 261: Data Structures, Oregon State University. (Summer 2007)
Summer School on Data Structures and Algorithms, IIT Kanpur. (Summer 2006)
Object Oriented Programming with C++, IIT Kanpur. (Winter 2006)
C101: Introduction to Computing, IIT Kanpur. (Fall 2005)
Current Research Group
I'm fortunate to work with the group of students below.
Keynote Talk, Exploiting Synergies between AI and Computing Systems for Sustainable Computing, International Conference on Distributed Computing and Intelligent Technology, 2024
Invited Talk, Research Methods in Data Science, ACM CODS-COMAD Conference, 2024
Bayesian Optimization over Combinatorial Spaces: Progress and Outstanding Challenges, Amazon Science Seminar 2023 (w/ Aryan Deshwal)
Bayesian Optimization over Combinatorial Spaces: Progress and Outstanding Challenges, INFORMS Annual Meeting 2023 (w/ Aryan Deshwal)
Hardware and Software Co-Design for Edge AI Tutorial at DAC 2023 (w/ Umit Ogras and Priya Panda)
Hardware-aware AI Algorithms to Improve Resource-Efficiency at DAC 2023
Bayesian Optimization over Combinatorial Spaces: Progress and Outstanding Challenges, Invited Talk at Meta Research
Editorial Board Member (Elected), Machine Learning Journal (2021 - present)
Editorial Board Member (Elected), Journal of Artificial Intelligence Research (2016 - present)
Associate Editor, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2024 - present)
Associate Editor, Frontiers in AI and Machine Learning (2023 - present)
Associate Editor, IEEE Design and Test of Computers (2022 - present)
Guest Editor, IEEE Design and Test of Computers, Special Issue on Smart and Autonomous Systems for Sustainability: Sustainable Computing and Computing for Sustainability (2018 - 2019)
Track Chair, Area Chair, and Senior Program Committee Member:
Area Chair: Main Track, AAAI National Conference on Artificial Intelligence (AAAI), 2025
Area Chair: International Conference on Learning Representations (ICLR), 2025
Area Chair: Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
Area Chair: International Conference on Machine Learning (ICML), 2024
Area Chair: International Conference on Learning Representations (ICLR), 2024
Area Chair: Main Track, AAAI National Conference on Artificial Intelligence (AAAI), 2024
Area Chair: Annual Conference on Neural Information Processing Systems (NeurIPS), 2023
Area Chair: International Conference on Machine Learning (ICML), 2023
Area Chair: International Conference on Learning Representations (ICLR), 2023
Area Chair: Main Track, AAAI National Conference on Artificial Intelligence (AAAI), 2023
Area Chair: Asian Conference on Machine Learning (ACML), 2023
Area Chair: International Conference on Machine Learning (ICML), 2022
Area Chair: Annual Conference on Neural Information Processing Systems (NeurIPS), 2022
Area Chair: Asian Conference on Machine Learning (ACML), 2022
Track Chair for AI Systems and Applications of AI at Edge: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems (CASES), 2022
Track Chair for AI and ML Systems: Design Automation Conference (DAC), 2022
Area Chair: International Conference on Learning Representations (ICLR), 2021
Area Chair: International Conference on Machine Learning (ICML), 2020
SPC: AI for Social Impact Track, AAAI National Conference on Artificial Intelligence (AAAI), 2024
SPC: International Joint Conference on Artificial Intelligence (IJCAI), 2023
SPC: AI for Social Impact Track, AAAI National Conference on Artificial Intelligence (AAAI), 2023
SPC: Main Track, AAAI National Conference on Artificial Intelligence (AAAI), 2022
SPC: AI for Social Impact Track, AAAI National Conference on Artificial Intelligence (AAAI), 2022
SPC: International Joint Conference on Artificial Intelligence (IJCAI), 2022
SPC: International Joint Conference on Artificial Intelligence (IJCAI), 2021
SPC: AAAI National Conference on Artificial Intelligence (AAAI), 2021
SPC: International Joint Conference on Artificial Intelligence (IJCAI), 2020
SPC: AAAI National Conference on Artificial Intelligence (AAAI), 2019
SPC: AAAI National Conference on Artificial Intelligence (AAAI), 2018
SPC: International Joint Conference on Artificial Intelligence (IJCAI), 2016
Program Committee Member:
ACM/IEEE Design Automation Conference (DAC), 2025
IEEE/ACM International Conference on Design Automation and Test in Europe (DATE), 2025
ACM/IEEE Design Automation Conference (DAC), 2024
IEEE/ACM International Conference on Design Automation and Test in Europe (DATE), 2024
ACM/IEEE International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 2023
ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES), 2023
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
International Conference on Machine Learning (ICML), 2021
Annual Conference on Neural Information Processing Systems (NeurIPS), 2021
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
International Conference on Uncertainty in Artificial Intelligence (UAI), 2021
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021
International Conference on Automated Planning and Scheduling (ICAPS), 2021
58th Design Automation Conference (DAC), 2021
24th IEEE/ACM International Conference on Design Automation and Test in Europe (DATE), 2021
ACM/IEEE International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 2021
IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2021
Annual Conference on Neural Information Processing Systems (NeurIPS), 2020
AAAI National Conference on Artificial Intelligence (AAAI), 2020
International Conference on Uncertainty in Artificial Intelligence (UAI), 2020
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2020
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
International Conference on Automated Planning and Scheduling (ICAPS), 2020
SIAM International Conference on Data Mining (SDM), 2020
57th Design Automation Conference (DAC), 2020
ACM/IEEE International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 2020
ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES), 2020
International Conference on Machine Learning (ICML), 2019
Annual Conference on Neural Information Processing Systems (NeurIPS), 2019
International Joint Conference on Artificial Intelligence (IJCAI), 2019
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2019
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
International Conference on Uncertainty in Artificial Intelligence (UAI), 2019
International Conference on Automated Planning and Scheduling (ICAPS), 2019
ACM/IEEE International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), 2019
ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES), 2019
ACM/IEEE International Network-on-Chip Symposium (NOCS), 2019
International Workshop on Graphs, Architectures, Programming, and Learning (GrAPL), held in conjunction with IPDPS Conference, 2019
International Conference on Machine Learning (ICML), 2018
Annual Conference on Neural Information Processing Systems (NeurIPS), 2018
International Joint Conference on Artificial Intelligence (IJCAI), 2018
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018
International Conference on Artificial Intelligence and Statistics (AISTATS), 2018
International Conference on Automated Planning and Scheduling (ICAPS), 2018
ACM/IEEE International Network-on-Chip Symposium (NOCS), 2018
AAAI Student Abstract Program, 2018
International Conference on Machine Learning (ICML), 2017
Annual Conference on Neural Information Processing Systems (NIPS), 2017
AAAI National Conference on Artificial Intelligence (AAAI), 2017
International Conference on Uncertainty in Artificial Intelligence (UAI), 2017
International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
International Conference on Automated Planning and Scheduling (ICAPS), 2017
EMNLP Workshop on Structured Prediction for Natural Language Processing, 2017
AAAI Student Abstract Program, 2017
International Conference on Machine Learning (ICML), 2016
Annual Conference on Neural Information Processing Systems (NIPS), 2016
AAAI National Conference on Artificial Intelligence (AAAI), 2016
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2016
International Conference on Artificial Intelligence and Statistics (AISTATS), 2016
International Conference on Automated Planning and Scheduling (ICAPS), 2016
International Conference on Computational Linguistics (COLING), 2016
EMNLP Workshop on Structured Prediction for Natural Language Processing, 2016
AAAI Doctoral Consortium, 2016
AAAI Student Abstract Program, 2016
Annual Conference on Neural Information Processing Systems (NIPS), 2015
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2015
International Joint Conference on Artificial Intelligence (IJCAI), 2015
AAAI National Conference on Artificial Intelligence (AAAI), 2015
International Conference on Artificial Intelligence and Statistics (AISTATS), 2015
AAAI Student Abstract Program, 2015
International Conference on Machine Learning (ICML), 2014
European Conference on Machine Learning (ECML), 2014
ECML Workshop on Multi-Target Prediction, 2014
CVPR Workshop on Computational Models of Social Interactions and Behavior: Scientific Grounding, Sensing, and Applications, 2014
International Conference on Machine Learning (ICML), 2013
International Joint Conference on Artificial Intelligence (IJCAI), 2013
AAAI National Conference on Artificial Intelligence (AAAI), 2013
European Conference on Machine Learning (ECML), 2013
ICCV Workshop on Understanding Human Activities: Context and Interactions, 2013
AAAI National Conference on Artificial Intelligence (AAAI), 2012
ECML Workshop on Collective Learning and Inference on Structured Data (CoLISD), 2012
International Joint Conference on Artificial Intelligence (IJCAI), 2011
AAAI National Conference on Artificial Intelligence (AAAI), 2011
ECML Workshop on Collective Learning and Inference on Structured Data (CoLISD), 2011
Sir C.V. Raman Educational Award, awarded by the State Govt. of Andhra Pradesh, India, 1998
Personal
I'm passionate about cricket. Playing cricket helps me remain sane amidst the hectic research life. I try to play in the nearby cricket leagues during the summers. I played for the OSU Cricket Club in 2007, 2008, and 2009. Our team Chak De Oregon won the 2009 NWCL cricket championship. In 2010, I played for Chak De Oregon in NWCL (Div I) and for the Portland club in OCL. We won the 2010 OCL T20 championship. In 2011, I played only for the Portland club in OCL as part of my budget cut on cricket. After the 2011 season I became very busy and could not justify the time I spent on cricket, so I stopped playing. I used to maintain my cricket scores here.
I like to cook, but I don't like to spend too much time on it. So I follow an engineering methodology for cooking, which provides a good trade-off between preparation time and the quality of the food! Does this remind you of my research work on trading off computation time and quality of predictions? :)