A recent piece on Medium asks, “Why Don’t Recommendations Look At the Bigger Picture?” Why aren’t larger sites using cross-category marketing? When a user links a Spotify account to a Facebook account, why doesn’t this lead to Spotify making recommendations based on comments, links, and other information shared on Facebook? As the author explains: “What I find galling is not that I have so much information about myself on the web — I put it all out there — but that no one can use it in any meaningful, smart way to serve me good content.”
This point of view is very much in line with the presumptions of targeted marketing – the notion that more and better recommendations benefit users as well as platforms. The author brings up the incident in which Target inadvertently “outed” a pregnant teen to her father, and dismisses its relevance just as quickly: she treats the incident as a marketing “failure” on Target’s part rather than as a question of privacy or of protecting personal data.
Maybe before we complain about companies not commodifying our preferences and profiles *competently*, we should take a minute to consider whether they ought to do it *at all*.
Should all our profiles be amalgamated and mined?
If one links a Spotify account to a Facebook account, surely that provides implicit consent to linking those profiles for mining purposes? Perhaps… but can either organization be sure the user fully understood what they were consenting to?
This assumes of course that what is being collected and mined is, in fact, the user’s own information. If others share an Amazon or iTunes account, how might that skew a profile? With whom might that profile be shared? What (erroneous) inferences could be drawn from such a profile, and how might that impact the user? Would we even know about the impacts—let alone their origins—such that errors could be corrected?
Profiles can also be generated from information gleaned outside of “authorized” sharing. On 25 March the FTC ruled on the case of Jerk.com, a website that presented users with personal profiles of themselves labeled “Jerk” or “Not a Jerk,” purportedly posted by other users. In fact, the FTC found, the information in the profiles was harvested from Facebook, and the “jerk” labels were added by site personnel. When profiles are amalgamated for mining, how can we be assured that this kind of falsely “user-generated” information will not be included?
We all love getting a gift or recommendation from someone who just “gets” us (as demonstrated by an astute choice). But… what my mother should “get” about me is not the same as what my lover “gets” about me, or what my friends “get.” Even within the category of friends, different aspects of myself are emphasized in different contexts, and “getting” me varies accordingly. I am NOT homogeneous, and neither are my preferences or my profiles.
Would it be convenient to have astute recommendations offered to me? Again, perhaps… but I can’t help feeling that it would profit the company making the recommendations far more than it would benefit me. And given that, I’d rather avoid the amalgamation and mining of profiles altogether. Here, the costs and risks clearly outweigh the (negligible) benefits.