In this article, I want to share some insights from the academic literature about the potential ways in which artificial intelligence (AI) may shape the future of self-management. AI is a set of emerging technologies built on learning algorithms, computational power and statistical techniques that can approximate the output of knowledge workers.
Technologies using AI are currently evolving at a dizzying speed. As organizations experiment with them in unexpected ways, new organizational forms often emerge, as we have seen with Uberization and the gig economy. And as the public and political debate about the regulation of these technologies continues, multiple scenarios remain possible for the future of work, some more desirable than others.
In this article, I will synthesize the literature to propose key opportunities as well as risks that AI brings for managers seeking to implement self-management practices. I will divide my analysis into the three essential dimensions of organizing: decision-making, coordination, and control. In doing so, I will briefly discuss how each dimension may be affected by AI in the context of self-management.
[💡article pick] What is Shared Governance?
Self-management requires teams of employees to work with a high level of autonomy. When AI tools are successfully integrated, they can markedly improve the basis on which teams make decisions, for instance by incorporating rich and diverse sources of data. This can enable workers to draw insights from the complete stock of their organization's knowledge to make more informed decisions. In turn, the capacity to make more informed decisions can increase the autonomy of teams, thus reducing the need for managerial oversight.
As the last few months have revealed, generative AI built on large language models (LLMs) can be especially powerful for assisting employees. While management research on LLMs is still in its infancy, many studies show that the use of generative AI at work (such as ChatGPT) tends to act as a skill leveler. By raising the performance of less skilled workers closer to that of higher performers, it can flatten hierarchies of expertise in a way that encourages every employee to create value.
Furthermore, using generative AI to democratize organizational knowledge can also increase the transparency of key organizational processes, which has been established as a necessary condition for self-management.
While tools such as Holaspirit enable role transparency, using them in combination with LLM software may further facilitate self-management, for example by providing employees with recommended courses of action. By giving each employee a form of personal assistance, these tools lower the psychological barrier to engaging in self-management, making it a viable practice for a larger pool of workers and their diverse personality profiles.
[💡article pick] How Does AI Impact Employees Within an Organization?
As we know, it is often easier for smaller organizations to remain fully self-managed over time than it is for large ones. One reason for this is that managerial authority remains a very effective way to integrate complex interdependent tasks between distant organizational units, especially in times of high uncertainty when a lot of mutual adjustment is necessary. In these situations, AI can play a key role in re-engineering workflows to reduce inter-team conflicts, thus enabling large-scale collaboration without the need for managerial authority.
For example, Amazon Mechanical Turk (MTurk), a crowdsourcing marketplace for hiring distributed workers, uses AI to reduce coordination costs by allocating tasks, checking for quality, and aggregating different outputs into a cohesive whole. While MTurk operates through a centralized governance system, the mechanisms through which it enables large-scale collaboration may be of great value for self-managing organizations.
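One of those mechanisms, aggregating redundant outputs from independent workers into a single result, can be sketched in a few lines. This is a simplified illustration using majority voting, not MTurk's actual implementation; the task ids and labels are invented for the example:

```python
from collections import Counter

def aggregate_labels(worker_answers: dict[str, list[str]]) -> dict[str, str]:
    """Combine redundant answers per task by majority vote.

    worker_answers maps a task id to the answers submitted by
    several independent workers for that task.
    """
    consensus = {}
    for task_id, answers in worker_answers.items():
        # most_common(1) returns the single most frequent answer.
        winner, _count = Counter(answers).most_common(1)[0]
        consensus[task_id] = winner
    return consensus

# Example: three workers label two images; they disagree on the first.
answers = {
    "img-1": ["cat", "cat", "dog"],
    "img-2": ["dog", "dog", "dog"],
}
print(aggregate_labels(answers))  # {'img-1': 'cat', 'img-2': 'dog'}
```

Redundancy plus automated aggregation is what lets a platform absorb individual errors without a manager reviewing each output, which is precisely the kind of coordination load that self-managing teams otherwise carry themselves.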
In times of crisis, it can be especially difficult for organizations to maintain self-management practices, as old habits of centralized authority often tend to creep back in. And while traditional hierarchies remain an efficient tool for solving intra-organizational conflicts, the need for conflict resolution in times of crisis can be diminished with a smoother and more intelligent integration of tasks, which AI can help achieve.
It is important to note, however, that AI technologies tend to perform especially poorly in times of crisis. This is because learning algorithms can only be trained on data acquired from past occurrences, which often reflect only normal patterns of action. Therefore, as new and unexpected events occur, such as the COVID-19 pandemic, AI models prove to be particularly error-prone. Human intelligence therefore remains central for coordinating in times of crisis. When it comes to self-management, it is important to remember that dividing and integrating tasks within organizations often remains an informal process that requires proximity and mutual understanding. While AI can help, it is no panacea.
Organizational control is central for organizations to ensure that the work of departments, teams, and individuals remain aligned with the goals of the organization. In traditional organizations, this is often achieved through top-down and hierarchical control mechanisms.
Because AI can continuously and consistently monitor deviance from key benchmarks, it can be (and already has been) used to provide feedback to employees and teams on their performance to help them achieve their goals. This in turn can help employees and teams self-adjust and make necessary changes when needed.
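As a toy illustration of this kind of benchmark monitoring, a team might review its own metrics against agreed targets and get automatic flags when something drifts too far. The metric names, targets, and tolerance below are invented for the example and do not reflect any particular vendor's tool:

```python
def deviation_alerts(metrics: dict[str, float],
                     benchmarks: dict[str, float],
                     tolerance: float = 0.10) -> list[str]:
    """Flag metrics that deviate from their benchmark by more than
    `tolerance` (as a fraction), so a team can self-adjust."""
    alerts = []
    for name, value in metrics.items():
        target = benchmarks[name]
        deviation = (value - target) / target
        if abs(deviation) > tolerance:
            alerts.append(f"{name}: {deviation:+.0%} vs benchmark")
    return alerts

# A team checks its weekly numbers against its own agreed benchmarks.
weekly = {"cycle_time_days": 6.0, "defect_rate": 0.02}
targets = {"cycle_time_days": 5.0, "defect_rate": 0.02}
print(deviation_alerts(weekly, targets))
# ['cycle_time_days: +20% vs benchmark']
```

The key design point for self-management is who receives the alert: routed to the team itself, this is a feedback loop that supports self-adjustment; routed only upward, it becomes surveillance, which is the tension the next paragraph turns to.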
While this can enable the implementation of less hierarchical control mechanisms through self- and peer-based forms of monitoring, there are also important privacy concerns that need to be addressed. For instance, AI is increasingly used in organizations to expand the scope of surveillance, creating what scholars have coined "algorithmic cages". One example of this is Uber, where drivers are nudged into compliance through rigid punishment and reward systems enabled by AI algorithms. Such examples are not unique to the gig economy, as can be seen in the increasing popularity of office surveillance technologies. These labor market trends pose a significant threat to the growth of self-management, as it is easy to imagine how constant algorithmic surveillance undermines psychological safety.
Therefore, whether AI promotes the growth of localized communities of self-management rather than leading to digital authoritarianism may ultimately depend on whether it is used in ways that protect the privacy of workers. This depends not only on the tools that managers decide to implement, but also on how these technologies are regulated in the political sphere. While the further development of AI technologies can create empowering conditions for employees, it is important to remember that how it shapes the future of work rests on the decisions of managers and policy makers across the globe. The story is still unfolding, and managers have the leading role.