Janardhan Rao (Jana) Doppa
   
Huie-Rogers Endowed Chair in Computer Science
    Chair, EECS Graduate Studies
    Associate Professor
    School of EECS, Washington State University
    Office: EME 133
    Office hours: By appointment for Spring 2024
    Voice: 1-509-335-1846 (Email is the preferred option)
    Email: jana.doppa [AT] wsu.edu 


My general research interests are in the broad field of artificial intelligence (AI), where I mainly focus on the sub-fields of machine learning and data-driven science and engineering. The current focus of my work is described in the Research section below.
I did my PhD with the Artificial Intelligence group at Oregon State University, where I was wisely advised by Prof. Prasad Tadepalli and Prof. Alan Fern. I'm fortunate to work with the inimitable Prof. Alan Fern, and I enjoy working with him a lot!

Note for Prospective Students: I'm always looking for strong, self-motivated, and ambitious PhD students. You can find more details here.
For undergrad students at WSU: If you are interested in working with me for research experience and/or honors thesis, please take my data mining class (CptS 315) and we can discuss the details along the way. Please read this article for some useful advice.

Undergrad / MS Non-Thesis Advising Meetings: Please drop by during my office hours. If my office hours don't work for you, please send me an email for an appointment.

Quick Links:   [  Research  ]   [  Publications  ]   [  Teaching  ]   [ Students ]  [  Awards and Honors  ]   [  Professional Service  ]   [  Reading Groups  ]   [  Personal  ]


Education

Ph.D., Computer Science, Oregon State University, 2014
M.Tech., Computer Science, Indian Institute of Technology, Kanpur, India, 2006

Research

I like to work on artificial intelligence and machine learning problems motivated by important real-world applications. A sample of my current and recent research projects includes:
How can we develop AI methods by combining valuable domain knowledge and data to accelerate scientific discovery and engineering design?
How can we exploit the synergies between machine learning and computing systems to enable the design of high-performance, energy-efficient, and reliable computing systems spanning from edge devices to servers to cloud, which will empower further advances in ML?
Collaborators: Partha Pande @ WSU, Krish Chakrabarty @ Duke, Helen Li @ Duke, Deuk Heo @ WSU, Ganapati Bhat @ WSU, Umit Ogras @ UW Madison, Paul Bogdan @ USC, Mike Kishinevsky @ Intel Research, Radu Marculescu @ CMU, and Diana Marculescu @ CMU
How can we learn to predict structured outputs (e.g., sequences, trees, and graphs)? Structured prediction tasks arise in a variety of domains including natural language processing (e.g., POS tagging, dependency parsing, coreference resolution) and computer vision (e.g., object detection, semantic segmentation). A toy sketch of this idea appears after this list.
How can we build intelligent computer systems that achieve deep language understanding? In the Deep Reading and Learning project, we are trying to learn a high-level representation called event graphs (a form of Abstract Meaning Representation) from raw text. Towards this goal, we are working on several sub-problems: 1) entity co-reference resolution within a document; 2) joint entity and event co-reference resolution across documents; 3) joint models for entity linking and discovery; and 4) learning general scripts of events. See our AAAI 2014 paper on script learning, EMNLP 2014 paper on co-reference resolution, and AAAI 2015 paper on learning for the Easy-first framework. [Funded by DARPA as part of the DEFT program]
How can we learn relational world-knowledge rules (e.g., Horn clauses) from natural texts to support textual inference? Natural texts are radically incomplete (writers don't mention redundant information) and systematically biased (writers mention exceptions to prevent readers from making incorrect inferences), which makes rule learning very hard. We address this by modeling the pragmatic relationship between what rules exist and what things get mentioned (e.g., Gricean maxims). We worked with BBN and other researchers from CMU, University of Washington, and ISI. See our NIPS 2011 and ACML 2011 papers for details. [Funded by DARPA as part of the Machine Reading program]
How can we integrate information from multiple sources to learn better? In the past, we worked on DARPA's Integrated Learning project, where the goal was to learn a complex problem-solving task from a single expert demonstration. We learned the cost function that the expert minimizes while producing the demonstration by formulating it as an inverse optimization problem. Our component was named DTLR (Decision Theoretic Learner and Reasoner). We worked with other researchers from Lockheed Martin, ASU, RPI, UMD, UMass, UIUC, and Georgia Tech. See our TIST 2012 paper for details. [Funded by DARPA]
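As a toy illustration of the structured prediction question above, the sketch below builds a structured output (a tag sequence) one greedy local decision at a time. This is a minimal, generic example, not code from any of the papers or systems mentioned on this page; the tag set, feature function, and weights are hypothetical stand-ins for a learned model.

    # Minimal sketch: greedy left-to-right sequence labeling as structured prediction.
    # The tag set, features, and weights below are hypothetical and only illustrate
    # the idea of building a structured output through a sequence of local decisions.

    TAGS = ["DET", "NOUN", "VERB"]

    def features(words, i, prev_tag, tag):
        """Indicator features over the current word, the candidate tag, and the previous tag."""
        return {
            f"word={words[i]}|tag={tag}": 1.0,
            f"prev={prev_tag}|tag={tag}": 1.0,
            f"suffix={words[i][-2:]}|tag={tag}": 1.0,
        }

    def score(weights, feats):
        """Linear score of a candidate decision under a (learned) weight vector."""
        return sum(weights.get(name, 0.0) * value for name, value in feats.items())

    def greedy_decode(words, weights):
        """Predict a tag sequence by committing to the best-scoring tag at each position."""
        tags, prev = [], "<START>"
        for i in range(len(words)):
            best = max(TAGS, key=lambda t: score(weights, features(words, i, prev, t)))
            tags.append(best)
            prev = best
        return tags

    if __name__ == "__main__":
        # Hypothetical weights standing in for a model trained on labeled sequences.
        weights = {
            "word=the|tag=DET": 2.0,
            "word=dog|tag=NOUN": 2.0,
            "word=barks|tag=VERB": 2.0,
            "prev=DET|tag=NOUN": 1.0,
            "prev=NOUN|tag=VERB": 1.0,
        }
        print(greedy_decode(["the", "dog", "barks"], weights))  # ['DET', 'NOUN', 'VERB']

Greedy decoding commits to each local decision immediately; much of the structured prediction literature, including search-based approaches, improves on this by scoring and revising complete or partial outputs instead.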

Publications 

  •  Learning Algorithms for Link Prediction based on Chance-Constraints
  • Janardhan Rao Doppa, Jun Yu, Prasad Tadepalli, and Lise Getoor
  • Proceedings of European Conference on Machine Learning (ECML), 2010
  • PDF
  •  Towards Learning Rules from Natural Texts
  • Janardhan Rao Doppa, Mohammad Nasresfahani, Mohammad S. Sorower, Thomas G. Dietterich, Xiaoli Fern, and Prasad Tadepalli
  • Proceedings of NAACL 2010 Workshop on Formalisms and Methodologies in Learning by Reading.
  • PDF
  • Chance-Constrained Programs for Link Prediction
  • Janardhan Rao Doppa, Jun Yu, Prasad Tadepalli, and Lise Getoor
  • Proceedings of NIPS 2009 Workshop on Analyzing Networks and Learning with Graphs.
  • PDF
  • An Ensemble Learning and Problem Solving Architecture for Airspace Management
  • Xiaoqin Zhang, Sung Wook Yoon, Phillip DiBona, Darren Scott Appling, Li Ding, Janardhan Rao Doppa, Derek T. Green, Jinhong K. Guo, Ugur Kuter, Geoffrey Levine, Reid MacTavish, Daniel McFarlane, James Michaelis, Hala Mostafa, Santiago Ontanon, Charles Parker, Jainarayan Radhakrishnan, Antons Rebguns, Bhavesh Shrestha, Zhexuan Song, Ethan Trewhitt, Huzaifa Zafar, Chongjie Zhang, Daniel D. Corkill, Gerald DeJong, Thomas G. Dietterich, Subbarao Kambhampati, Victor R. Lesser, Deborah L. McGuinness, Ashwin Ram, Diana F. Spears, Prasad Tadepalli, Elizabeth T. Whitaker, Weng-Keen Wong, James A. Hendler, Martin O. Hofmann, and Kenneth R. Whitebread
  • Proceedings of AAAI Conference on Innovative Applications of Artificial Intelligence (IAAI), 2009
  • PDF

    Teaching

    Courses at WSU:

    In the past, I was the instructor for the following courses:


    Current Research Group

    I'm fortunate to work with the group of students listed below.

    Former Students and Postdocs


    Professional Service

       Tutorials, Invited Talks, and Special Sessions:
        Conference Organization:
        Journal Editorial Service:
        Track Chair, Area Chair, and Senior Program Committee Member:
        Program Committee Member:
        Reviewer:

    Machine Learning Reading Group (MLRG)

    At WSU, I often run focused reading groups on topics related to my current projects.
    At OSU, I organized and led several reading groups on a wide variety of topics (2009-2014). Some of them include:

    Awards and Honors


    Personal

    I'm passionate about cricket. Playing cricket helps me remain sane amidst the hectic research life. I try to play in the nearby cricket leagues during the summers. I played for the OSU Cricket Club in 2007, 2008, and 2009. Our team, Chak De Oregon, won the 2009 NWCL cricket championship. In 2010, I played for Chak De Oregon in the NWCL (Div I) and for the Portland club in the OCL. We won the 2010 OCL T20 championship. In 2011, I played only for the Portland club in the OCL as part of a budget cut on cricket. After the 2011 season I became very busy and could not justify the time spent on cricket, so I stopped playing. I used to maintain my cricket scores here.

    I like to cook, but I don't like to spend too much time on it. So I follow an engineering methodology for cooking, which provides a good trade-off between preparation time and the quality of the food! Does this remind you of my research work on trading off computation time and the quality of predictions? :)