Research papers, March III 2021

Edvard Kardelj Jr.
Published in Letters on Liberty
Mar 20, 2021

--

Research papers on: Crisis Management, Leadership, Gamification


Myopic Voters and Natural Disaster Policy, by A. Healy and N. Malhotra

Abstract: Do voters effectively hold elected officials accountable for policy decisions? Using data on natural disasters, government spending, and election returns, we show that voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending. These inconsistencies distort the incentives of public officials, leading the government to underinvest in disaster preparedness, thereby causing substantial public welfare losses. We estimate that $1 spent on preparedness is worth about $15 in terms of the future damage it mitigates.

Our results show that voters significantly reward disaster relief spending, holding the incumbent presidential party accountable for actions taken after a disaster. In contrast, voters show no response at all, on average, to preparedness spending, even though investing in preparedness produces a large social benefit. We estimate that the average $1 spent on disaster preparedness reduces future disaster damage by more than $7 in a single election cycle, and that the total value of a dollar of preparedness spending for all future damage reduction is about $15.
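As a rough illustration of what these benefit-cost ratios imply (the $100 million preparedness outlay below is a hypothetical figure for the sake of the arithmetic, not a number from the paper):

$$
\text{total future damage averted} \approx 15 \times \$100\text{M} = \$1.5\text{B},
\qquad
\text{averted within one election cycle} \approx 7 \times \$100\text{M} = \$700\text{M}.
$$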

Our central finding is that voters offer scant incentive to presidents to pursue cost-effective preparedness spending, but do encourage them to send in the cavalry after damage has been done and lives have been lost. It is difficult to say exactly how preparedness and relief spending should be optimally balanced, but the evidence strongly suggests that the manner in which voters currently incentivize politicians, with essentially all weight on the latter, is not anything close to optimal.

A government responding to the incentives implied by our results will underinvest in natural disaster preparedness. The inability of voters to hold government effectively accountable thus appears to contribute to significant inefficiencies in government spending, because the results show that preparedness spending substantially reduces future disaster damage.

When voters provide their elected officials with incentives to make mistakes, ranging from insufficient investment in natural disaster preparedness to perhaps excessive attention to airline security, elected officials are likely to provide the inefficient policies that voters implicitly reward.

From problems to progress: A dialogue on prevailing issues in leadership research, by S. Ashford and S. Sitkin

Abstract: This paper presents a dialogue between two scholars who have come to contribute to the leadership literature rather late in their careers and, as such, embody a combined insider/outsider perspective. From this perspective, they raise and discuss various observations about the current state of the leadership literature and where that literature might profitably go in the future. The hope is that this dialogue will stimulate other dialogues and, ultimately, foster progress in the leadership literature.

Leadership is an activity, not a position.
Second, we like to elevate the importance of our work. Sometimes it is fairly silly. When I entered the field, we stopped calling people in such positions “supervisors” and started calling them “managers,” as if by a simple label change we could enhance the importance of our research. Now we call those same people, our various samples of people holding a formal position of authority, “leaders,” thereby enhancing their importance yet again (and beyond recognition?). By making this equation, we obfuscate what leadership really is. More importantly, by making this equation, we literally do not see leadership that happens in other places (from below, laterally, etc.).

“The notion that leaders can be identified by their location in a hierarchy strikes me as lacking even simple face validity. Occupying or being appointed to a supervisory or managerial position doesn’t magically make one a leader.”

By democratizing the construct (making it an identity that anyone can claim and be granted; DeRue & Ashford, 2010), we may strip away some of its prestige, but we may also discover one way that organizations can be more effective and responsive to an increasingly complex and fast-changing world — by cultivating more leadership in more places.

“Authority” (which I define as having a position that controls certain resources, such as money or jobs or assignments, and has the power to allocate them) is often mislabeled as “leadership.”

Management involves influence via systems. Managing involves designing and operating incentive systems, job design and task allocation, communication systems and information flows — in other words, actions to forge and ‘manage’ formal (e.g., structure) and informal (e.g., culture) systems.

Leadership also involves influence, but it is the influence of one person directly on another person, or indirectly through intermediary individuals. Leadership is an act of social influence aimed at clarifying where a collective is going and motivating others to help get there.

This is exciting because of research that goes up a level or down a level. Going “up a level” would shift us from our historical focus on the individual him- or herself to examining, say, patterns of leadership at the team level and/or the interplay of formal and informal leadership within teams, departments, and organizations.
Going “down a level” would mean that rather than starting with the assumption that someone either is or is not a leader, we would acknowledge that all people have moments when they behave in a more or less leader-like way and examine within-individual variance in leadership behavior. My colleague here at Michigan, Bob Quinn (2005), names those moments “the fundamental state of leadership.”

What does leadership look like when done by non-designated leaders (NDLs)? Do the same behaviors, when done by a designated leader (DL), have the same effect when done by someone not in such a role? I call this phenomenon the “bringing a pizza” issue. If a DL brings a pizza for a hard-working team, traditional leadership studies would code that as an act of people-oriented leadership, as “consideration.” If a team member were to do the same (say, in a leaderless group), would it be similarly coded? Would the group see it as leadership at all? In addition to what we will study, we likely need to think carefully about methods for these more complex questions.

I think the distinction between a “designated leader” and a “non-designated leader” is a really useful one. It clarifies that the important distinction is whether there is an official position of authority involved, a job title, or a consensual, institutionalized acceptance of a role. In my view, one can exercise influence as a leader, such that an outside observer might label it leadership, yet it can occur without explicit certification or even recognition by the group being led or by oneself.

If I don’t see myself that way but others do see me as a leader, should we consider me a leader? If I am exhibiting all of the requisite leader behaviors and am having real, measurable influence on those around me, but they do not label me as a leader due to gender-biased perceptions of what qualifies as a leader, am I then a leader?

If leadership is not about hierarchy or leading down only, then it allows us to examine leadership as a set of distinct behaviors with distinct effects. For example, if one of the tasks of leadership is to try to help forge collections of individuals into well-functioning communities, then bringing a pizza could well be a leadership behavior regardless of who does it. That said, to paraphrase a quotation often attributed to Sigmund Freud, sometimes a pizza is just a pizza. But not always. And we as leadership scholars need to be able to tell the difference.

Kuhn noted that progress is made only when a generation of scholars is replaced by a new one: individuals do not give up their pet theories and methodologies, but eventually give way when they stop controlling the journals, training programs, and reward systems.

Taylorism 2.0: Gamification, Scientific Management and the Capitalist Appropriation of Play, by J. deWinter, C. Kocurek and R. Nichols

Abstract: By making work seem more like leisure time, gamification and corporate training games serve as a mechanism for solving a range of problems and, significantly, of increasing productivity. This piece examines the implications of gamification as a means of productivity gains that extend Frederick Winslow Taylor’s principles of scientific management, or Taylorism. Relying on measurement and observation as a mechanism to collapse the domains of labour and leisure for the benefit of businesses (rather than for the benefit or fulfilment of workers), gamification potentially subjugates all time into productive time, even as business leaders use games to mask all labour as something to be enjoyed. In so doing, this study argues, the agency of individuals — whether worker or player — becomes subject to the rationalized nature of production. This rationalization changes the nature of play, making it a duty rather than a choice, a routine rather than a process of exploration. Taken too far or used unthinkingly, it renders Huizinga’s magic circle into one more regulated office cubicle.

In this article, we contend that computer games superficially look and act as a type of scientific management as advocated by Frederick Winslow Taylor; however, because of the computerized medium itself, gamified training serves as an expansion of scientific management into new spaces while effacing the politics of class and access in the workforce. This engagement dangerously collapses the domains of labour and leisure by combining the domains of play space and the real world.

‘The problem with institutionalizing alternative realities in art or in games is that they become co-opted by the system, subordinated to the prevailing world view’.

While businesses have always included games of sorts — sales competitions, playing the market and ropes courses — games under a Taylorist model shift the competitions from how well you sell to the specific mechanics of selling, micromanaging producers and consumers on an unprecedented scale. Indeed, what we find most disturbing here is not just that Taylorism as gamification extends micromanagement to incorporate the practices of leisure time, attempting to make work seem like fun (even when it’s not inherently, like counting the number of olives allowed on a Subway sandwich), but also that it opens the potential to force leisure time to become productive, whether in relation to one’s own work or as an extension of some outside agent’s need for production.

The intertwining of games and work, then, suggests explicitly that work should be more like play but implicitly seeks to make play into productive work via games.

Taylorism (or scientific management), after all, was envisioned as a means of making labour time more productive. Introduced by Frederick Winslow Taylor in his 1911 work The Principles of Scientific Management, the processes were designed to address inefficiencies in production systems by breaking them down into component actions, which could be perfected through measurement. Under this system, each discrete action in a production process — each move of the body, each turn of the screw — would be optimized to maximize productivity.

Work can be made to seem more like play and, so, potentially more productive through enjoyment, while leisure time can be made productive by turning leisure habits into usable data for production.

In response to the For the Win symposium, Bogost argues that businesses are taking the mysterious power of games and leveraging a poor imitation of that power for sales and marketing. The focus on points and levels prevalent in many gamification approaches rarely accounts for complexity, behaviour or community.

Games, according to Caillois (1961), are more restricted than play, adhering to strict ludic rules. To learn to play a game well is to learn the rules of the game and perform well within those rules.

If a player fails at a training game, then the player is at fault — she does not understand the content or she has a bad play style — it’s not the game’s fault. Thus, in the logic of gamification, the simulation stands in for the values of work; therefore, this approach suggests that if a worker fails at work, it’s the worker’s fault rather than corporate responsibility.

While this rationalization of player/worker performance can effectively identify efficient processes, it can do so only in a general way, which is to say that such analysis of work tends to identify averages and generalities; rationalization can identify the most efficient process for average employees, but it cannot identify the most efficient process for each employee. In highly regimented work environments, this distinction can result in employees being forced to complete tasks following the approved processes rather than the processes that may work best given their particular abilities or skills, and it hampers worker-driven innovation.

People alter their behaviour when they are observed or believe they are observed, a problem that has long dogged both managers and scholars and inspired both Jeremy Bentham’s now-infamous Panopticon (Foucault 1975) and George Orwell’s 1984 (1950).

One of the more troubling aspects of gamification deployed as a form of scientific management is the extent to which it can facilitate the collapse of values between play and work and player and worker. In this collapse, the processes and pleasures of work and play are not only entangled, but in fact become — or at least shift towards being — indistinguishable.

Ruggill and McAllister add, ‘[c]omputer gameplay is capable of producing both wealth and goods — not to mention different kinds of knowledge (e.g., spatial, ludic, problem-solving and so on) — and thus seems as if it is inescapably work, perhaps even labour’ (2011: 91). This collapse of work and play into games is important to this critique precisely because of the theories and critiques of scientific management. Consider Taylor’s maxim, written in 1911: ‘[i]n the past the man has been first; in the future the system must be first’ (2006: 7). This is perfected in the computer-as-training game.

More problematic for workers, of course, is the question of what happens when gamified processes succeed and actually result in something so fun that work can creep into leisure time. In such moments, unless workers are compensated for the time spent, companies experience what is essentially a free boost to production. The more successfully fun the game, the larger the uncompensated productive boost.

We are not naïve enough to believe that critiques of gamification will stop future gamified projects. Yet we would argue for an approach that is less game and more play. To play is to innovate, to frame-shift (as the linguists would say) by asking players to navigate their many subjectivities by layering them (e.g. being a mother, an elf, a worker and a citizen) or by hybridizing them in a flow state. Play requires imagination and rules. But unlike in games, within play the imagination and the rules are plastic, changing with situational exigencies that are defined by actors, culture, materials and ethics.

Further, if surveillance data must be used (and we know that it will continue to be used in our surveillance society), then that data can be used to critique the system at large, not just the ‘cogs in the machine’.

The iteration and training, then, should not focus on the individual, and it should not focus on standard work; it should focus on systems of labour that enable ‘quality of life’ to be the value-added metric of success.
