This article is the first part in our series on algorithmic optimization for streaming platforms, or simply RSO (short for Recommender System Optimization). In this introductory piece, we will cover Music Tomorrow’s suggested approach to algorithmic optimization and present the data tool we’ve developed that allows us to extract and analyze artists’ algorithmic profiles from Spotify. You can find the second part, showcasing and structuring various RSO tactics and the ways in which artists and their teams can influence streaming recommenders, here.

For years now, algorithm-mediated recommendations have been an increasingly important subset of music discovery. According to Luminate's Music360 report, 2020 was the first year when streaming services overtook "family and friends" as the primary channel for music discovery, with over 62% of music listeners in the US naming DSPs among their top discovery sources.

This trend, further confirmed by the Music360 report released the following year, doesn't really come as a surprise. By the 2020s, the recommender systems of major DSPs had largely realized their potential — with countless AI-driven features deeply embedded into their respective apps, ready to serve users a personalized selection of songs for every moment and every occasion. And it's not just the tech that has matured. Over the years, consumers themselves have grown to trust and rely on these recommender systems, making them an integral part of the modern-day music experience.

A track landing on someone's Discover Weekly brings about more than just a stream. In a way, recommender systems have become some of the most important opinion leaders in the industry. The same way a recommendation from a trusted friend can go a long way — compared to, let's say, an ad you see on Instagram — a recommendation from a "trusted" algorithm brings about a sort of post-social proof. "I've already discovered countless favorite tracks through the platform, so I'm sure that Spotify knows what music I like." Listeners have built trust in prominent music recommenders — especially in high-profile properties like Discover Weekly.

So, music recommenders and listeners' relationships with these systems have evolved significantly in the last decade. Yet artists and their teams — the final component of the streaming ecosystem — are still pretty much where they were in 2010 when it comes to algorithmic optimization. Today, most algorithmic strategies follow a kind of "throw spaghetti at a wall and see what sticks" approach: make sure that the metadata you provide to DSPs is as complete and accurate as it can be, and then, well, do your best promoting the release — and hope that the algorithms will pick it up. With some tracks, they will. With others, they won't.

Or, even worse — left without any actionable insights into the closed system of streaming recommendations, artists and their teams might resort to questionable ways of "hacking" the algorithm, like buying fake streams to pass arbitrary "algorithmic thresholds". All while, in reality, this fraudulent listening activity is more likely to hurt your long-term algorithmic performance instead of boosting it.

More and more, digital advertising campaigns rely on algorithmic support to amplify marketing spending and reach ROI targets. More and more, you hear phrases like "oh, and the algorithms love it!" thrown around A&R meetings. But no one ever really knows why the algorithms love it, and why one track will generate two algorithmic streams for every ad-driven listener while another won't get any traction with the recommender. But hey, these recommenders are a black box, so that's the best we can do, right?

Well, not really. The context is not exactly new: algorithms have been influencing the performance of businesses and creators across a wide range of industries for years — if not decades. SEO strategies, built around the idea of optimizing written content for search engine algorithms to generate organic traffic, are a key subset of marketing operations for businesses of all shapes and sizes. The YouTube recommender is notorious for its ability to "make or break" YouTubers' careers — prompting creators to at least follow best practices, if not create content specifically designed to thrive on the home page.

In all of these cases, however, businesses and creators have a dedicated support system — a set of optimization instruments, data tools, and best practices helping them formulate and carry out algorithmic optimization strategies. Which is not the case for music — at least, not yet. Today, we’re excited to tell you how we’re aiming to fix that.

Reverse-engineering Spotify's Recommender System

But before we share what we've been working on for the past year or so, let's set the scene a little bit. Imagine you are a label manager walking into the office on Monday, pulling up Spotify for Artists on your priority release, and seeing the following pattern emerge over the weekend:

A sizable spike, attributed to radio and autoplay — but what exactly happened here? How can you explain it, and what's even more important, how can you replicate or support this trend?

Today, there are still no definitive answers to questions like these. To get to a place where you would be able to find answers, you need two things. First, you must understand how Spotify's recommender system works, at least in broad strokes. We've covered this in great detail in our recent article on Spotify recommender and our dedicated course on Music Tomorrow academy, so head over there if you need to refresh your memory. Secondly, you need to be able to uncover how the Spotify recommender currently perceives your project or track — in terms of where and how your music is recommended.

Structuring the Recommendation Landscape 

Let's focus on radio and autoplay for a moment, leaving some of the more specialized and complex algorithmic spaces like Discover Weekly or Release Radar out of scope. When it comes to autoplay, the recommendations will always have a clear point of departure: a song, an artist, a playlist, or a listener queue, which serves as a "seed" for the algorithm. This seed is fed to the recommender, which is tasked with composing a personalized queue of songs for a specific user, optimized for long-term retention (whether the user keeps coming back to autoplay, the total time they spend on the platform, and so on).

To wrap your head around how you can manage this flow from an artist's perspective, think of the recommendation landscape as a series of candidate pools. A candidate pool is a set of similar assets that Spotify can generate for almost any artist or track on the platform. These pools are based purely on asset representations: the recommender system will combine audio analysis, NLP models, and collaborative filtering approaches to determine which assets are similar to each other (if you're feeling a bit lost here, check out the breakdown of Spotify's recommender system mentioned above).

In much simpler terms, candidate pools are a product of the algorithm designed to excel at a single task: "here's a track, now give me a hundred other tracks that are similar to it." Within Spotify's recommendation system, this algorithm is a "shared model" — the candidate pool outputs are passed into various feature-specific models that generate user-facing recommendations. In the context of radio & autoplay, for instance, the recommender will combine the track representations with user representations (i.e., user's context-aware taste profiles) to generate a ranked list of tracks based on the initial candidate pool. This ranked list of tracks is the final output of the recommender — a personalized autoplay queue.
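To make this two-stage flow a little more concrete, here is a minimal Python sketch of the "candidate pool, then personalized ranking" logic described above. Everything in it is a toy assumption: the embeddings, track names, and user vector are invented, and Spotify's real models are far larger and not publicly accessible.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Invented low-dimensional track embeddings (real ones are private and much larger).
track_vectors = {
    "seed_track": [0.90, 0.10, 0.30],
    "track_a":    [0.80, 0.20, 0.40],
    "track_b":    [0.10, 0.90, 0.20],
    "track_c":    [0.85, 0.15, 0.35],
}

def candidate_pool(seed, vectors, size=2):
    """Stage 1 (the "shared model"): the tracks most similar to the seed."""
    scored = [(t, cosine(vectors[seed], v)) for t, v in vectors.items() if t != seed]
    scored.sort(key=lambda tv: tv[1], reverse=True)
    return [t for t, _ in scored[:size]]

def rank_for_user(pool, vectors, user_vector):
    """Stage 2 (feature-specific model): re-rank the pool against a user taste vector."""
    return sorted(pool, key=lambda t: cosine(vectors[t], user_vector), reverse=True)

pool = candidate_pool("seed_track", track_vectors)
queue = rank_for_user(pool, track_vectors, user_vector=[0.20, 0.80, 0.30])
print("candidate pool:", pool)
print("personalized queue:", queue)
```

The key takeaway is structural: a track must first survive the similarity-based candidate stage before any user-level ranking can ever surface it.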

Now, if your goal is to get your song recommended to a specific user, this system is nearly impossible to exploit. However, if your goal is to maximize the radio and autoplay traffic at an artist/track level, things become much more straightforward. To optimize your streaming performance on algorithmic radio, you just need to get the artist's songs onto as many candidate pools as possible, right?

Well, not exactly. Imagine that the Spotify recommender has granted you one wish that you can use to get your track into any candidate pool of your choosing. The first instinct is to go big: get into the candidate pool for Drake or Harry Styles, so that your song is exposed to as many listeners as possible, right? Well, not really — in this case, the chances are that your favor with the algorithm won't bring anything apart from a short-term gain. Once the algorithm starts to serve your track on Drake's radio and finds out that it performs poorly, the song will end up forever stuck at the bottom of personalized ranked lists and never actually get recommended — that is, until it gets taken out of the candidate pool altogether.

So, a wiser choice is to work with the algorithm rather than trying to abuse the system and choose an artist of a smaller scale whose music would actually be a good fit for your track. That way, the newly created algorithmic connection between the assets will likely generate positive feedback from listeners, becoming a long-term growth driver for your artist instead of "just another random radio spike". So, to optimize an artist's performance on algorithmic radio, you need to get onto as many relevant candidate pools as possible.

Now, the above section is a bit of an oversimplification, of course. In reality, things are much more complex — the candidate pools are constantly in flux, and the final ranking criteria might differ greatly from one algorithmic feature to another. The initial candidate pool might be based on multiple artists, tracks, or genre/style/mood seeds at once instead of a single asset — and it won’t always be clear which seeds are used to generate recommendations. Yet, the overall “candidate pool” logic is likely to hold up — and not just when it comes to radio and autoplay features.

Well, this is all very interesting indeed. But how can I influence that system as a music professional working on an artist's career? Well, the first component of any good plan is understanding where you are now and where you're trying to get to.

Our Recommender System Optimization (RSO) Tool for Spotify

In other words, you need to understand how your music is recommended today and set goals in terms of how your music should be recommended in the future. That means that you have to be able to query and analyze Spotify's embedding space — for simplicity's sake, think of it as "that candidate pool algorithm." Of course, there's no way to access that shared model directly. However, if you study both the inputs and outputs of that model, you can build an approximation of it. That was the initial thought behind the project we started about a year ago — a project aiming to finally provide artists and their teams with insight into the black box of streaming recommendations.

Extracting the Artist's Current Algorithmic Profile

The basis of our algorithmic analysis tool is a model that processes three primary types of inputs:

  1. Feature-specific outputs of the recommender system 
  2. Similar artist relationships generated by the platforms 
  3. Artist and track co-occurrences on user-generated playlists used to train the recommender 

Based on this data, we are able to clearly establish algorithmic connections and calculate "algorithmic distances" between artists, playlists, and tracks on the platform. With a target artist serving as a departure point, our algorithmic profiling tool is able to uncover a section of the Spotify embedding space — an "algorithmic map" of the artist's surroundings. Then, we are able to structure that space into clusters by automatically grouping the assets, and define each of these groups in terms of featured artists, genres, playlists, and even audience demographic profiles. These clusters provide insight into how the artist's overall audience is broken down into algorithmic sub-audiences, allowing us to understand where and to whom the artist's music is recommended on Spotify.
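As a rough illustration of how an input like playlist co-occurrence can be turned into "algorithmic distances" and then clusters, here is a toy Python sketch. The playlists, artists, similarity measure, and thresholding method are all invented for illustration; our actual model combines all three input types at a much larger scale.

```python
import math
from collections import Counter
from itertools import combinations

# Toy user playlists, made up for illustration.
playlists = [
    ["joji", "keshi", "daniel caesar"],
    ["joji", "keshi", "boy pablo"],
    ["joji", "daniel caesar", "h.e.r."],
    ["metallica", "slayer"],
    ["metallica", "slayer", "pantera"],
]

# 1. Count how often each pair of artists appears on the same playlist.
cooc = Counter()
for pl in playlists:
    for a, b in combinations(sorted(set(pl)), 2):
        cooc[(a, b)] += 1

artists = sorted({a for pl in playlists for a in pl})

def profile(artist):
    """Co-occurrence vector of an artist against every artist on the map."""
    return [0 if other == artist
            else cooc.get(tuple(sorted((artist, other))), 0)
            for other in artists]

def similarity(a, b):
    """Cosine similarity between two co-occurrence profiles."""
    va, vb = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norm if norm else 0.0

# 2. Connect artists whose similarity clears a threshold, then read off
#    clusters as connected components (a crude stand-in for real clustering).
def clusters(threshold=0.1):
    adj = {a: {b for b in artists if b != a and similarity(a, b) > threshold}
           for a in artists}
    seen, groups = set(), []
    for node in artists:
        if node in seen:
            continue
        group, frontier = set(), [node]
        while frontier:
            n = frontier.pop()
            if n not in group:
                group.add(n)
                frontier.extend(adj[n] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

print(clusters())
```

Even this crude sketch separates the indie/R&B neighborhood from the metal one; the real tool does the same kind of grouping over thousands of nodes, then labels each group with its dominant genres, playlists, and audience traits.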

Perhaps it's best to use an example — below, you will find a simplified visual representation of an algorithmic map generated for Joji (in reality, the output of the tool features thousands of nodes, so we have to simplify it quite a bit for clarity’s sake).

A simplified render of Joji's algorithmic profile

Each node on the image above represents an artist; each cloud is an automatically generated cluster of artists (defined based on the most common genre tags in that group); and each card indicates a notable playlist affiliated with a cluster.

At a glance, Joji's case presents a well-developed and diverse algorithmic profile, made up of 8 distinct clusters, each representing a subset of Joji's algorithmic audience. 6 out of these 8 clusters capture Joji's sonic profile, presenting a complex system of interconnected artists, genres, and styles, namely:

  1. Mainstream Pop Rock (featuring Rex Orange County, Dominic Fike and Wallows)
  2. Indie Garage Pop (featuring boy pablo, Phum Viphurit and The Marías)
  3. Chill R&B (featuring Bruno Major, H.E.R. and Daniel Caesar)
  4. Lo-Fi Hip-Hop (featuring Verzache, Peachy! and khai dreams)
  5. Emo Rap (featuring Aries, ericdoa and Lund)
  6. Alternative Hip-Hop (featuring Yeek, Kevin Abstract and BROCKHAMPTON)

Together, these clusters do a pretty good job of representing Joji's eclectic musical style: mixing the lo-fi hip-hop elements and soulful, melancholic R&B of his debut with the poppier, more mainstream-friendly sound of his sophomore release, Nectar. However, the results of our analysis suggest that none of these six sonic clusters can actually be considered the core of Joji's algorithmic profile.

The first core cluster for Joji revolves around 88rising — a hybrid label/management company representing the artist. By fostering meaningful connections between the artists on its roster through collaborations and shared properties (like 88rising playlists and compilations), 88rising was able to construct not only a cultural but also an algorithmic space around its roster.

Having experimented with the model quite a bit at this point, we can say with confidence that this is a rare algorithmic phenomenon — especially for catalogs of this scale. Such clusters are much more common for developing artists with small but highly engaged audiences, usually operating in siloed genre niches. In other words, a collective-centric cluster is what you expect from a Swedish black metal band — not a triple-A artist with a couple of platinum-certified singles under their belt.

In business terms, the existence of the 88rising algorithmic cluster means one simple but potentially very impactful thing. A user actively listening to one of the artists on an 88rising roster — an established artist like Joji or Rich Brian — is likely to be exposed to other 88rising artists through algorithmic recommendations. And given the scale of some of the 88rising artists, this can be a substantial organic promotion channel for developing artists on the company's roster.

The second core cluster we find is also somewhat of a "meta-cluster" — structured around a promotion vehicle rather than a specific sound. Through Joji's connection to Pink Guy (Joji's side project), bbno$, Hobo Johnson, and Oliver Tree, we can claim that a large subset of Joji's algorithmic profile is still connected to the meta-cluster of "Viral Rap". This cluster is not homogeneous in terms of the artists' sonic profiles — encompassing a wide variety of sounds, from the aggressive, industrial electropunk of Death Grips to the quirky, mainstream-ready hip-hop of bbno$.

Instead, what unites all these projects is their viral nature: nearly all artists in the cluster are staples of internet culture and heroes of countless memes — whether that was their deliberate promotion strategy or an organic, truly viral success story.


Joji's retired internet persona of FilthyFrank is probably not a secret to anyone — and given the breakout success of the project back in 2017-18, you might assume that Joji has wholly overshadowed his past comedy career, establishing himself as a successful pop artist in his own right. Yet his algorithmic profile is still clearly tied to his YouTube career as FilthyFrankTV. This take is further supported by the results of track affiliation analysis — we found that nearly all of Joji's top tracks are still primarily attributed to the two clusters described above: viral rap and 88rising. Out of all of Joji's catalog, only two of his most popular tracks — SLOW DANCING IN THE DARK and YEAH RIGHT — break out of this pattern, establishing a strong connection to one of the six sonic clusters.

Now, we could dig much further into Joji's algorithmic profile — studying the composition of the profile in terms of specific artist connections or zooming in on any of the six sonic clusters. Yet, even the limited profile analysis presented above allows us to highlight some of the most impactful algorithmic phenomena affecting Joji's performance. So, with the project closing in on the new release cycle with the upcoming drop of SMITHEREENS, try to ask yourself: considering what you've just learned about Joji's algorithmic profile, how would you approach the algorithmic optimization of the new record? Perhaps you could keep this example in mind as you follow along with the rest of our series on streaming RSO.

So, the output of the RSO model allows us to get a clear snapshot of the current algorithmic profile for pretty much any artist on Spotify. Yet, profile assessment is just the first step in the RSO process — similar to the initial keyword research you would undertake when formulating an SEO strategy. Once you've run that initial assessment, it's time to move to the next step: setting strategic targets and prioritizing the extensive list of potential keywords to source the content strategy. Or, in our case, prioritizing candidate pools and setting targets in terms of where and when you want your music to be recommended.

Setting Algorithmic Targets

Recommender System Optimization — or RSO for short — can be applied at three different levels of assets: 

  • at the track level, optimizing the performance of a single upcoming release, 
  • at the artist level, optimizing an artist's overall algorithmic profile and catalog, including past and upcoming releases,
  • at the label/roster level, optimizing a catalog spanning multiple artists and projects. 

For now, let's start with the basics and focus on a single-release optimization.

The first step towards formulating a release optimization strategy is to set a clear target: an algorithmic positioning you'd seek to develop for the upcoming release. Generally speaking, this target algorithmic positioning should answer the following three core questions:

  1. What audiences should the music be recommended to? What genres, artists, tracks, or playlists do the potential listeners currently engage with? Should you prioritize hardcore fans of a particular style/genre or "generalists" with eclectic music profiles? Is your music more fit for "mainstream" audiences that prefer popular music or music aficionados searching for deep cuts and hidden gems? And, again, think candidate pools: what is the single high-priority candidate pool you want the upcoming release to feature in? Which surrounding candidate pools could you leverage to get there? But remember, the goal here is to work with the algorithm, not against it. So you should also assess how realistic the target positioning is, considering how competitive these target candidate pools are and how close they are to the artist's current algorithmic profile.
  2. What recommendation context would fit the release? For example, should the strategy prioritize laidback autoplay features or high-engagement, discovery-centric algorithmic playlists? Should the song be recommended to users looking for a high-energy soundtrack to get them going in the morning — or should it be served on a lazy Sunday afternoon instead?
  3. How would you translate the target positioning into the predetermined metadata inputs available through your distributor or platform-specific interfaces like the Spotify for Artists pitch form? What is the single genre tag central to your target positioning? What mood and style tags describe the desired recommendation context?
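To make the first question more tangible, here is a hypothetical sketch of how you might prioritize target candidate pools by balancing proximity to the current profile against competitiveness. The pool names, scores, and the scoring formula are all invented assumptions, not outputs of our tool.

```python
# Hypothetical target candidate pools, scored on two invented axes:
# distance from the artist's current algorithmic profile (0 = identical)
# and competitiveness of the pool (1 = fully saturated).
targets = [
    {"pool": "Drake radio",        "profile_distance": 0.95, "competitiveness": 0.99},
    {"pool": "bedroom pop rising", "profile_distance": 0.20, "competitiveness": 0.40},
    {"pool": "lo-fi beats",        "profile_distance": 0.35, "competitiveness": 0.70},
]

def reachability(target):
    """Crude priority score: favor pools close to the current profile
    that aren't saturated with competition (higher = more realistic)."""
    return (1 - target["profile_distance"]) * (1 - target["competitiveness"])

ranked = sorted(targets, key=reachability, reverse=True)
for t in ranked:
    print(t["pool"], round(reachability(t), 3))
```

Under this toy scoring, a huge but saturated pool far from the current profile scores near zero — which mirrors the "work with the algorithm, not against it" argument above.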

In other words, the target of an algorithmic strategy can be generally set in terms of "we want the new single to be recommended to casual fans of genre X, who listen to artists A, B and C, when they are looking for a calm, soft background music to help them relax after a long day." Once that target positioning is set, it's time to move on to the most exciting part — turning that target into reality. 
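That verbal target can also be captured as structured data that later maps onto distributor metadata and pitch-form fields. Every field name and value below is a made-up example rather than any actual Spotify or distributor schema.

```python
# A target algorithmic positioning expressed as structured data.
# All field names and values are illustrative placeholders.
target_positioning = {
    "audience": {
        "genre": "genre X",
        "reference_artists": ["artist A", "artist B", "artist C"],
        "listener_type": "casual fans",
    },
    "context": {
        "mood": ["calm", "soft"],
        "setting": "background listening, relaxing after a long day",
    },
    "metadata_inputs": {
        "primary_genre_tag": "genre X",
        "mood_tags": ["chill", "mellow"],
    },
}

def pitch_summary(t):
    """Render the target as the kind of sentence used in the article."""
    aud = t["audience"]
    return (f"Recommend to {aud['listener_type']} of {aud['genre']} "
            f"who listen to {', '.join(aud['reference_artists'])}, "
            f"in a {' / '.join(t['context']['mood'])} context.")

print(pitch_summary(target_positioning))
```

Keeping the target in one structured object makes it easy to check, release after release, that the tags you actually submit still match the positioning you set out to build.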

But I guess this is where I will have to leave you hanging. The article you've just read is crossing into 15-minute-read territory, and we're only halfway there at best — so we've decided to split it into a multipart series on streaming RSO. The next part, showcasing the various tactics and levers you could employ to influence streaming recommenders, is available here.