The success of AI has created incentives to deploy it in more sensitive areas such as healthcare, the military, and policy making. Using AI in such sensitive fields raises new fundamental challenges. Will a doctor trust an AI prediction to confirm a severe diagnosis for a patient? How can people be encouraged to adopt AI systems more widely for mundane, routine tasks? Studying these general questions requires bridging the gap between Artificial Intelligence, Human-Computer Interaction, and Cognitive Science. This thesis aims to understand what guides the acceptability, trust, and decision making of human users when they interact with AI-based systems. More precisely, this thesis aims to: 1) develop an understanding of how users make decisions when using AI-based systems at different levels of granularity: methods, tasks, and systems. This approach relies on theories and models from cognitive science research on human decisions, judgment, and perception, applied to Human-AI interaction; and 2) design, implement, and evaluate novel interaction and visualization methods that favor informed decision making with AI-based systems. This approach relies on models and principles from Human-Computer Interaction to design how recommendations are displayed. The use case considered in this project is recommendation systems for medical diagnosis. This context is sensitive, as the decisions have consequences for the patient; challenging, as decisions must sometimes be made within a couple of seconds (e.g., during surgery); and relevant, as it involves expert users who must weigh their own expertise against the AI's knowledge.