Between Codes and Hearts: Research and Artificial Intelligence Towards a More Human Technology

DOI: https://doi.org/10.37843/rted.v18i2.724

Main Article Content

Dra. Garcia-Heredia, N. B.
US
https://orcid.org/0009-0007-1404-0213
Dra. Mujica-Sequera, R. M.
US
https://orcid.org/0000-0002-2602-5199

Abstract

Technology, beyond its instrumental component, must incorporate affective and ethical dimensions that reinforce empathy and social solidarity. The objective of this essay was to analyze how algorithmic processes can be articulated with the ethical and affective perspectives inherent to the human experience. To this end, the essay is framed within a humanist paradigm, using an inductive method with a qualitative, interpretive approach and a narrative topic design. Throughout the text, the philosophical foundations of artificial intelligence (AI), its applications in contexts of social interaction, and its moral implications are examined. Specific examples of intelligent systems oriented toward care or education are analyzed and contrasted with innovations of a strictly functional nature. The essay also argues for an interdisciplinary approach that integrates perspectives from psychology, sociology, and data science. Finally, it concludes that only through an ongoing dialogue between codes and hearts can technological development truly serve human dignity.


Article Details

How to Cite
Garcia-Heredia, N. B., & Mujica-Sequera, R. M. (2025). Between Codes and Hearts: Research and Artificial Intelligence Towards a More Human Technology. Docentes 2.0 Journal, 18(2), 337–345. https://doi.org/10.37843/rted.v18i2.724
Section
Articles

References

Adams, J. (2023). Defending explicability as a principle for the ethics of artificial intelligence in medicine. Medicine, Health Care and Philosophy, 26(4), 615–623. https://doi.org/10.1007/s11019-023-10175-7

Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.

Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(2), 13–27. https://doi.org/10.1177/1461444816676645

Bakhtin, M. M. (1981). The dialogic imagination: Four essays (M. Holquist, Ed.; C. Emerson & M. Holquist, Trans.). University of Texas Press.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.2139/ssrn.2477899

Bentley, P. J. (1999). Evolutionary Design by Computers. Morgan Kaufmann. DOI: https://doi.org/10.1007/978-1-4471-0819-1_8

Cavoukian, A. (2011). Privacy by Design: The 7 Foundational Principles. Information and Privacy Commissioner of Ontario.

Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/fhumd.2024.1421273

Cloud Security Alliance. (2021). Security Guidance for Critical Areas of Focus in Cloud Computing v4.0.

Couldry, N., & Mejías, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. DOI: https://doi.org/10.1515/9781503609754

Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer Nature. DOI: https://doi.org/10.1007/978-3-030-30371-6

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118. https://doi.org/10.1038/nature21056

European Commission. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, 1–88. https://n9.cl/bkdv7

European Union. (2024). Artificial Intelligence Act (EU Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending certain Union legislative acts). Official Journal of the European Union. https://n9.cl/k1ka6

Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 259–268. https://doi.org/10.1145/2783258.2783311

Floridi, L. (2013). The ethics of information. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199641321.001.0001

Gentry, C. (2009). A fully homomorphic encryption scheme. Stanford University.

Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29, 3315–3323. https://doi.org/10.48550/arXiv.1610.02413

Henderikx, P., Kreijns, K., & Kalz, M. (2017). Refining success and dropout in massive open online courses based on the intention–behavior gap. Distance Education, 38(3), 353–368. https://doi.org/10.1080/01587919.2017.1369006

IEEE. (2020). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (2nd ed.). IEEE.

ISO/IEC 27001:2013. (2013). Information technology — Security techniques — Information security management systems — Requirements. International Organization for Standardization.

ISO/IEC JTC 1/SC 42. (2025). Information technology — Artificial intelligence (AI) — AI system impact assessment (ISO/IEC 42005:2025). International Organization for Standardization.

Lipton, Z. C. (2016). The Mythos of Model Interpretability. arXiv. https://doi.org/10.48550/arXiv.1606.03490

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

McMahan, B., Moore, E., Ramage, D., Hampson, S., & Agüera y Arcas, B. (2017). Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 54, 1273–1282. https://n9.cl/3eh3z

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Norman, D. A. (2013). The Design of Everyday Things (Revised and expanded ed.). Basic Books.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A, 30(3), 286–297. https://doi.org/10.1109/3468.844354

Raymond, E. S. (1999). The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. O’Reilly Media.

Rose, S., Borchert, O., Mitchell, S., & Connelly, S. (2020). Zero Trust Architecture. NIST Special Publication 800-207. DOI: https://doi.org/10.6028/NIST.SP.800-207

Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for human-centered AI. IEEE Computer, 53(10), 83–90. https://doi.org/10.1145/3419764

Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO Publishing.

van Dijk, T. A. (2000). El discurso como interacción social [Discourse as social interaction]. Gedisa.

Volóshinov, V. N. (1976). Marxism and the philosophy of language (L. Matejka & I. R. Titunik, Trans.). Harvard University Press.

Wachter, S., & Mittelstadt, B. D. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2), 494–620. https://n9.cl/9ihwj0
