Demographics & Postdemographics

Social science researchers frequently use demographic data to understand the behavior of individuals in accordance with perceived aspects of social organization and culture. Such data is often tied to “biopolitics,” since the more observably stable portions of our lives tend to come from the people we were raised around, our ethnic culture, and so on.

Amid the continual design and redesign of social media platforms, user experiences, and user expectations, these aspects can be much harder to consider, because they are largely self-reported, or manipulated by users based on how they believe society will use that data. This often leaves academics to categorize these kinds of data according to their own perceptions. The use of such data from existing social media is therefore, at best, only probabilistically accurate. Thus, a new kind of data may become necessary to account for historically defined and perceived variations in user behavior on social media platforms. Indeed, these kinds of data may become more clearly understandable as we come to understand identity in relation to particular social designs and paradigms. However, the researcher should also consider the complex relation between the presumption of a demographic and a person's reasons for claiming, or not claiming, that classification.

Some of these issues will be discussed in relation to a serious ethical warning. First, however, establishing some notion of what kinds of data can be considered “demographic” will help clarify how this new kind of data differs from prior kinds of demographic information.

Perceived Identity Data:

Researchers occasionally make explicit the association of demographic data with “biopolitics.” Historically, “statistics” were developed as metrics of the state for the purposes of governance.1 In that vein, ethics discussions can treat data on people in a metaphorical way that has been academically whitewashed of political history for the sake of objectivity. But these categories are often tied to measures grounded in policy, policing, and methods for finding essential commonalities between kinds of people of particular legal interest. Just because an academic believes their scientific objectivity is sound does not mean they have priority over the identity, future development, and interpretation of these kinds of data; perhaps they never had any such control.

Given this pragmatic, essentialist relation of personal attributes to some static personal culture, social scientists are often interested in using demographic data to say whether people are more likely to behave one way or another, or as a means of holding constant culturally specific nuances in the experience of social phenomena. Such data includes race, ethnicity, age, income, sex, gender, education level, and data derived from these characteristics.

Notice that these kinds of data are usually considered observable: things we have believed could be verified among individuals of a society, as I have said, upon perception. Because these demographic aspects are either difficult to perceive online or can be extended upon, a new classification of data has come to exist via social media. This data might be said to reduce to, emerge from, or otherwise explain new things about the identity of individuals within new kinds of perceived “society” or “community.” These new kinds of data, mostly related to “interests and tastes,” have been termed postdemographics by Dr. Rogers, who defines their study “as the study of the data in social networking platforms, and, in particular, how profiling is, or may be, performed.”2

It is argued that this kind of data is distinct from prior social, political, and economic phenomena. In particular, Dr. Rogers writes on page 153 of Digital Methods that postdemographic data “is intended to stand in contrast to the use of demographics to organize groups, markets, and voters in a sociological sense.” I will try to make clear why I think this separation is nearly impossible, even if well intended.

Unique Social Media Problems in Use:

Firstly, one might consider that Dr. Rogers is only emphasizing awareness of the difference between kinds of data. To assume ownership of such a new kind of data, however, would be detrimental to its academic objectivity as soon as a social medium such as Facebook makes clear that it designs such data in use to sway people's votes, to track and identify their behaviors toward policy and policing goals, and so forth, manipulating members of its community into conditions economically or socially favorable to Facebook. This is certainly of academic interest in a political sense,3 and anyone tracking Facebook policy recently would know it occurs. Further, this plainly violates the claim of being “in contrast to the use of demographics to organize groups, markets, and voters…” It would appear it could be precisely that. Rogers may well have defined postdemographics, but I would hope he is keenly aware of its methodological classification of actual people, and of how it is used to justify who does and does not see various kinds of information.

Secondly, it should be clear from my framing of the problem that this kind of classification rests on an ontology constructed by academics (at best) and the state (at worst). In either case, these cannot be said to be real or natural kinds. At present they are mostly constructed qualitative differences, defined and estimated by observing parties with immediate uses in study or judgment, and frequently organized so that they can serve both. This is not so isolated a process as the classification and nomenclature of subatomic particles. Explicit political and economic agendas often play a part in their naming and categorization at the level of scientific grants, which can in turn frame how social scientists choose to motivate their topics, questions, and hypotheses. It would be wrong for a social scientist to claim otherwise, so one should not portray oneself as entirely in charge of demographic or postdemographic classification. Yet we should also consider ourselves at least tacitly morally involved in its social use. It is the work of social scientists using these data that gets cited in policy papers. One must at least be aware of its use beyond one's own tenure requirements, and be prepared to make changes in the case of its misuse.

It should probably be stated here that I am not in ethical disagreement with the use of demographic or postdemographic data. Certainly, issues related to demographics are important for individuals seeking to become more understanding of, and accurately educated about, people not like themselves, and these differences are often made visible through demographic data. There must be some way for one academic to speak to another about common classifications of people and their activities in society. This is a necessary part of a social scientist's toolkit: being able to repeat experiments through operational definitions is a necessary facet of accurate and educated social consensus.

So again, it is important that academics use these kinds of data with care, so as not to imply prejudices rooted in false essentialist natures of people unlike ourselves. Just because such data is necessary does not mean its use at large should be ignored. Academics are a group with a certain amount of privilege. We should take care to exercise our use of data ethically at all costs, and to assume the consequences when we fail. As a social scientist, I do not intend to be a defender of the state, industry, or social platforms and these institutions' historic uses of demographic information. I will certainly feel the economic and social pressure to become one, but I must never forget that I am here for other reasons: to seek truth and enable a more loving society. If I enable otherwise, I am certainly partially to blame, and I would be just as much at fault if I were not using demographic or postdemographic data. It is worth noting that “Black Twitter” is observable, and that African Americans are disproportionate users of Twitter, which might tell me something about some of these people's real situations.4 But by the same token, I must understand that this information can be used for nefarious reasons.

I should take precautions to avoid such a situation, and be prepared to defend my interpretations against any nefarious use that follows from them. One might easily develop a machine learning tool that uses semantic choices on Twitter to stop situations like neo-Nazi violence at protests; but just as easily, such tools could suppress movements like #MeToo, or others considered too politically charged to maintain stately civility.5 One would be naive to think all policing tools serve only their developers' own policy choices. As soon as they are conceived, they are also tools that can be wielded by the state or the social media platform for whatever agenda the current administration sees fit. One might think any individual use or abuse by a disagreeable administration is minimal, but one must remember that it was a group of intelligent academics who designed the atomic bomb for death, even if it was a president and his military who used it. Perhaps one could argue that such weapons are occasionally necessary under extreme circumstances, or that this comparison is completely hyperbolic; perhaps these are defensible points.
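To make concrete how low the barrier is, here is a minimal sketch of such a classifier: a toy naive Bayes model trained on a handful of hand-labeled example tweets. Everything here is hypothetical, including the training data and the “tense”/“calm” labels; this is not Burnap et al.'s actual method. The point is precisely that the labels are a policy choice baked in by whoever trains the tool.

```python
from collections import Counter, defaultdict
import math

# Hypothetical hand-labeled training tweets. The labels themselves
# encode a policy choice about what counts as "tension".
TRAIN = [
    ("we will march tonight bring everyone", "tense"),
    ("this protest is getting violent stay away", "tense"),
    ("lovely weather for the park today", "calm"),
    ("great game last night congrats to the team", "calm"),
]

def train(examples):
    """Count words per label for a naive Bayes classifier."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the most probable label, with add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    scores = {}
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("the protest tonight could turn violent", word_counts, label_counts))
# → "tense" on this toy data
```

Whoever controls `TRAIN` controls what the tool flags: relabel the protest examples and the same twenty lines of math suppress a different movement.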

Still, the point is not merely that we should be on a path that does not needlessly harm people; the point is that we should actively intend to avoid such harm wherever possible. Academics' decisions can hurt people. Methods can lead to needless or even counterproductive surveillance. Methods can lead to citizens paying taxes to drug-test poor people out of prejudice. Methods can lead to AAA-rated assets causing an entire housing market to collapse. Methods can be used to justify a state's reasoning that one race is lesser than another. Academics should be aware of where their methods have limitations, and where those limits should perhaps be drawn, both epistemologically and ethically. I mean this in relation to the limitations of the method itself, not its use in answering a particular whim of interest. We should know that methods and theories in and of themselves allow for outcomes beyond their initial developer's intent. It is not always a success to be cited for a use of a method in some hypothetical situation. Academics should be more far-sighted than their next grant application.

Build and use tools mindfully.


Image Attribution: Alexander O. Smith

  1. Zande, Johan van der. “Statistik and History in the German Enlightenment.” Journal of the History of Ideas 71, no. 3 (July 21, 2010): 411–32. 

  2. Rogers, Richard. Digital Methods. Cambridge, Massachusetts: The MIT Press, 2013. 

  3. Bratton, Benjamin H. The Stack: On Software and Sovereignty. MIT Press, 2016. 

  4. Murthy, Dhiraj, Alexander Gross, and Alexander Pensavalle. “Urban Social Media Demographics: An Exploration of Twitter Use in Major American Cities.” Journal of Computer-Mediated Communication 21, no. 1 (January 2016): 33–49. 

  5. Burnap, Pete, Omer F. Rana, Nick Avis, Matthew Williams, William Housley, Adam Edwards, Jeffrey Morgan, and Luke Sloan. “Detecting Tension in Online Communities with Computational Twitter Analysis.” Technological Forecasting and Social Change 95 (June 1, 2015): 96–108. 
