In a recent post announcing Sam Altman’s dismissal, OpenAI cited a loss of confidence in the CEO’s leadership after conducting a review, stating that he “was not consistently candid in his communications with the board.” Meanwhile, some voices linked the firing to the heightened risks of advanced AI; Elon Musk, for one, suggested that Altman’s dismissal was somehow connected to the dangers posed by the technology. Within a remarkably short timeframe, and after unsuccessful efforts to reinstate Altman as CEO of OpenAI, Microsoft recruited him to lead its newly established advanced artificial intelligence team. In a CNBC interview, Microsoft CEO Satya Nadella said the priority is to make sure they continue to innovate and never stop innovating.
Amid these rapid changes within the world’s leading AI company, and its clear intention to keep innovating, a crucial and unresolved matter takes center stage: how to effectively navigate AI development. The intricacies of dealing with AI, its impact, risks, and ethical considerations remain critical themes that demand attention and resolution.
What do we know about AI so far?
AI technology presents a significant challenge as it transitions from an agent that models reality to one that alters that reality. It empowers software and computer programs to execute intricate actions in unstructured environments, mimicking human tasks across diverse fields such as medicine, transportation, logistics, and finance, as well as contentious areas like defense systems. The question of whether AI can surpass human capabilities is no longer theoretical.
AI functions like a power-up, significantly amplifying productivity. Humans, for instance, cannot handle ten different tasks or read twenty books at once. AI can operate multiple applications simultaneously, a capability beyond human capacity and akin to a superpower. It enhances productivity to unprecedented levels, tackling massive tasks that even crowds of humans could not accomplish.
While AI used to be limited to specific tasks in structured environments, such as playing chess, more recent developments tackle unstructured environments, such as finding the shortest path to a destination or enabling algorithmic trading and algorithmic advice in finance.
Today’s AI might still fall short when it comes to running a government or managing other complex organizations. But as AI processes vast and diverse datasets, including our dynamic and diverse language (which is already the case today with ChatGPT), and operates in unstructured environments encompassing intricate contexts, it will undoubtedly expand its scope and functionality.
One of the most intriguing aspects of AI is its ability to make judgments. In today’s context, AI serves as a tool that significantly influences human judgment. For instance, we entrust machines to analyze issues such as a malfunctioning engine in a Tesla car or an aircraft. Similarly, we often rely more on Google Maps than on directions provided by strangers at specific locations. Judgment is an integral part of human natural intelligence.
Before humans reach a judgment, they must select and weigh diverse cognitive information from their unstructured environment before finally making a decision. In a niche context, using technology to solve our problems is fascinating. However, in the wider context of our complex societies, if we mistrust our own intelligence in forming judgments and leave this task to AI, we must reconsider whether (a) we have a comprehensive understanding of our natural intelligence, and (b) we understand the full implications of what artificial intelligence can do, both inherently and in its consequences for our world.

Open-source vs. closed-source AI
OpenAI originated as an open-source, non-profit entity aimed at countering Google’s influence, yet it has transformed into a closed-source, profit-driven organization heavily backed by Microsoft. Given the known and yet-to-be-known risks of AI, is it better to have open-source or closed-source AI?
The transition from open- to closed-source AI models indicates a shift from a non-profit, transparent development process to a profit-centric, more restrictive approach. Open-source AI allows community involvement, akin to a democratic process, fostering collaborative input and modification. Closed-source AI, meanwhile, can provide dedicated support, customization, and proprietary tools tailored to specific applications or industries.
In today’s AI development, the industry is working not only on generative AI but also toward a larger goal: Artificial General Intelligence (AGI). As far as AGI is concerned, no one knows exactly what they are dealing with; developers are still exploring the possibilities and opportunities that arise from it. With closed-source AI, the conversation about what AI is capable of, and allowed to do, revolves within a closed loop of technologists, management, and perhaps investors and shareholders. It is difficult to mitigate AI risks in such a closed loop. Mitigating AI risks calls for open-source models, enabling widespread participation, modification, and proactive risk management.
Ilya Sutskever has addressed the debate over open- versus closed-source AI (Stanford eCorner). He argued that open-source AI is beneficial because it prevents a concentration of power in the hands of those developing the AI. On the other hand, he argued that if AI eventually becomes very powerful, it would obviously be irresponsible to keep it open-source. This discussion resembles a contemplation of whether humanity would favor a democracy or an autocracy/dictatorship when confronted with overwhelming power.

Yuval Noah Harari described democracy as a system with many self-correcting mechanisms (e.g., a free press, academia, independent courts), whereas a dictatorship usually has none. He believed that even a dictator can make good decisions, but not forever, because sooner or later everyone makes mistakes, and in a dictatorship it is difficult to reverse wrong decisions. It is therefore important for a democracy to maintain a constant conversation through its self-correcting mechanisms (Endgame).
Conclusion
In the recent unfolding of events within OpenAI’s management and subsequent developments, a deeper concern lingers around the implications of advanced AI. The complexities, risks, and ethical dilemmas entwined with AI technology remain pivotal subjects that necessitate immediate attention and resolution. AI can remarkably enhance productivity and judgment capabilities. However, if we entirely rely on AI for decision-making, we need to question our understanding of natural intelligence and the far-reaching implications for our world.
How should we best deal with AI? The debate between open-source and closed-source AI models reflects a broader discussion about the governance of immense power and resembles the choice between democracy and autocracy. Echoing Harari’s concept of self-correcting mechanisms, this narrative underscores the urgent need for democracy, including in the field of AI. We need an ongoing dialogue not only among the technologists who contribute to AI development but also among various societal stakeholders, to ensure collective participation and to comprehensively address the problems in the field of AI.
Within a democratic sphere, participation should not exclude civil society, academia, government, and investors who integrate the principles of impact investing into their investments. This inclusive approach aims not only for enhanced representation of humanity but also for a more significant positive impact on the environment as a whole.