
Music, According to ChatGPT

This piece originally appeared in our biweekly newsletter, Sound Signal, which identifies emerging artists, scenes, and trending tracks, crafted by the world's best writers and curators. Sign up here to never miss our take on what's next in music.

Since its July 2023 launch, the buzz around Suno has grown among tech heads, social media users, and songwriters alike. An emerging AI music generator that produces professional-grade songs, Suno uses its own AI models to create the music and relies on OpenAI's ChatGPT to create original lyrics and song titles. A free account on the platform's website can generate a unique song in less than a minute.

Eclectic and shockingly realistic, the tool's song creations teeter between deeply inspired and downright comical. Suno's top trending creation, "bean soup," is a hilarious interpretation of, well, a bean soup recipe. At 1,320 likes on Suno, the nearly three-minute song finds an electronic male voice singing the dish's ingredients and measurements over a gaudy '00s metal-rock production that sounds like it was written for a Saturday Night Live sketch. Another trending creation, "You're destroying me" (869 likes), takes aim at social media's instant-gratification culture. The Southern R&B-flavored song features an AI-generated female voice soulfully singing about her social media ills with the pain of a heartbroken lover, complete with background vocals and verse-chorus structure.

But the most unbelievable Suno creation of all has come via Rolling Stone, which recently published a highly detailed feature story on the rise of Suno and the company's vision behind it. When Rolling Stone writer Brian Hiatt requested a "solo acoustic Mississippi Delta blues about a sad AI" on Suno, the result was a brooding one-minute Delta blues track, "Soul of the Machine," that evokes a vintage Robert Johnson recording from the 1930s and centers on a hopeless AI. After the track was shared on SoundCloud, many commenters were shocked by how authentic the music sounded and wondered how generative AI music tools like Suno would factor into a traditional songcraft process.

No matter what side of the fence people are on about emerging AI technologies, the rise of AI music generators is sparking curiosity about the future of music-making on all fronts. Suno is blurring the lines between language, culture, genre, and music creation by enabling people to execute musical ideas without traditional training or technical skill.

If you're seeking deeper insights on music industry trends, we'd love to connect with you. Contact us to schedule a meeting with our Music Intelligence team.

Third Bridge Creative and AI: Our Guiding Principles

Over the past year and a half, artificial intelligence has been the focal point of nearly every conversation about the future of labor. Corporations and startups are spending billions of dollars on AI as people project their hopes and fears onto it, setting it up as a proxy for attitudes towards culture, capitalism, and technology. With all the noise, it’s been difficult for many in the creative space to understand the real-world impact of these new innovations on our work and lives. Third Bridge Creative is engaging with AI in the hopes of transforming the tool sets we provide to our workforce and creating a new line of services and products that we offer to our clients. We are approaching AI in a way that is ethical and fair to our contributors and community, improves our work, and helps us better articulate our perspectives on culture.

As we begin this journey, we’ve spent time talking with different individuals inside and outside our company to understand how we fit into this landscape, and how we can do so in a way that is fair and transparent to our community, collaborators, and clients. Below are the basic principles that we will adhere to as we continue to navigate this space. 

Our north star will always be figuring out how these new technologies improve the work of our creatives, not replace them.

We believe that at its root, culture is a fundamentally human endeavor. This is true of the people who create it, and equally true of those who shape and engage with the conversation around it. Regardless of how we use AI to help us execute our work, critical creative decisions will always be made by humans. This will be true of not only our final products, but also inform how we use the technology in our work itself: which data sets we identify and curate, how we structure our prompts, and the types of software and tools we use.

We’ll have to think and work differently by privileging innovation and experimentation.

The landscape for creatives has changed many times since the advent of the World Wide Web over thirty years ago. Each time, creative practitioners adapted and learned new ways to use their knowledge and skills to make a living from their craft. We are currently in one of these transitional periods. In order to survive and grow – both as a business and as individual creatives – we’ll have to change the way we work. This will require us to try new things. We’ll have to expend resources, both in terms of monetary investments and time. We’ll place smart bets, but it’s inevitable that while some of our efforts will yield valuable results, others won’t. What’s important is that we try out new ideas and learn from them. 

Used properly, artificial intelligence can enhance our work.

At its core, AI is a mechanism that powers a new suite of tools. There are many ways that we could use it to help us execute our work on a technical level: performing administrative, rote tasks such as data entry; providing input into strictly non-creative tasks such as language translation or quality assurance; aiding in the creative process by developing first drafts of articles, blurbs, or playlists, or contrasting different stylistic approaches; or enhancing the quality or the accuracy of the data that we use to inform our decisions around curation or assignment allocation.

There are also gaps in AI's execution that can't be filled by a machine.

The heart of our work engages with culture and art – through listening, watching, and taking in information, we interpret these signals based on our values and perspective, and synthesize these ideas using technical skills that we’ve honed over many years. AI is good at ingesting and synthesizing data, but the interpretative and critical thinking that informs our decisions is essential to our processes and the quality of our work, and allows us to see and work against any biases that AI algorithms might be pulling from. It also leaves space for original thought that pushes the culture forward.

Our work and our ideas have value, and we will provide attribution and compensation for them.

Conversations around culture are collaborative and iterative. Threads overlap and ideas build off one another, but our knowledge and understanding of culture is based on the labor of specific individuals – those who have written and created reviews, features, videos, Wikipedia entries, blogs, and social posts on the internet for the past thirty years. As we puzzle through how to use these tools, we’ll seek to acknowledge that labor, and use tools, build models, and develop products that fairly compensate these individuals.

We will use artificial intelligence in a transparent manner with both our contributors and our clients.

We will let our community – employees, contributors and clients – know how we are using this technology, and how it impacts our work. We will never use any contributor's work to train language models without their knowledge and consent. We will never use AI to develop an end product without explicitly informing our client.

We want to continue to have these important conversations within our community. If you have thoughts, reservations, or want to bring up other considerations in regards to how we might best approach AI, please don’t hesitate to reach out to us directly: hello@thirdbridgecreative.com

Music Curation’s Sweet Spot

At Third Bridge Creative, a significant portion of our work in the music space falls under the umbrella of curation, and the core of this service offering is our team of music experts. We have relationships with hundreds—if not thousands—of people who are extremely knowledgeable about music across genre, style, geography, and era, and who have exceptionally good taste in their areas of expertise. These are writers, DJs, and musicians, mostly, and for TBC they bring their knowledge and taste to bear on a wide range of curation projects. In some cases, data plays a role in guiding part of the decision-making. But outside that, how do curators make decisions? What mindset do they have to adopt to select the right tracks for a project?

We start from a creative brief, where the client has documented the guidelines for the assignment. As we begin our work, the curator's primary task is to balance three distinct imperatives. 

Knowledge

First, they should draw on their own knowledge of what artists and songs belong in the scope of what the brief describes. This requires that they understand the parameters of the concept, and have a deep and broad understanding of the catalog of music that it encompasses. 

Audience

Second, the curator should take into account an imagined listener or viewer, and what they might wish to hear in the context of the assignment. 

Client priorities

The brief should go a long way toward conveying this third pillar. Priorities may include music the client is highlighting, artists they might be featuring, or content they'd rather not include in the assignment.

Instincts and knowledge

While we undertake a wide variety of curation projects, from music supervision for software applications to metadata hygiene, for simplicity let's focus on a playlist on a streaming DSP. The curator's instincts and knowledge inform selections via their assessment of relevance, their understanding of classification (genre, style, era), and their judgment regarding what's most important.

For instance, the curator may know that a given artist is the most easily recognizable or widely known representative of a certain sphere, but that there are plenty of opportunities to include others who either never got the same wide recognition or whose star faded more quickly for one reason or another. There are all sorts of reasons why songs get (and stay) big, and familiarity plays a part. Top 40 stations have long known that if you play a song 10 times a day, a significant percentage of the audience will come to expect it, and may even enjoy it. And if you extend this phenomenon across decades, some songs are big simply because of inertia; just because a song is frequently played and widely recognized doesn't mean that in its heyday, it was the only interesting thing going.

Several years ago, Billboard assembled a list of the “Greatest of All Time Hot 100 Songs,” based on each song's performance on the Hot 100 charts starting in 1958. The No. 1 choice? The Weeknd’s “Blinding Lights.” And No. 2 was Chubby Checker’s “The Twist.” Seeing those tracks adjacent to one another is confusing—in terms of artist, era, style, and content, they are very far apart. 

But it’s also instructive to think of them in terms of taste and, for lack of a better word, zeitgeist. Songs don’t get any more emblematic of an era than “The Twist,” but the number of people who currently wish to hear the song over and over again is probably relatively small. Similar judgments are made by curators in terms of which tracks fit where—are they truly of a given genre? Is their music essential to that genre?—and whether they matter in the context of their time. Another interesting example is rapper MF DOOM, who was important during his aughts prime but did not at the time sit at the center of the hip-hop conversation. He's become more influential following his death and his songs have gathered a great deal of momentum, but those tracks might sound out of place on a nostalgic playlist of his contemporaries, since he wasn’t operating in the same sphere during the era. Such distinctions are what separate hand-curated playlists from algorithm-driven ones. 

Audience

The curator also needs to put themselves in the place of the imagined listener, using their own best judgment to think through what this person might want and expect from a given playlist. The ability of the curator to remember that they are not the (only) customer and to think broadly—to have their own tastes and priorities, but to square those with their understanding of the tastes and priorities of others—is crucial to music curation projects. 

In other words, the curator's own considerations of relevance, classification, and importance are still there, but they weigh them against their understanding of the average listener’s tastes, knowledge, and expectations. Is this audience filled with music people, obsessives who will understand the connections the curator might be reflexively making? Or are these music generalists who will above all appreciate a playlist that includes some songs they affectionately recognize? 

Client priorities

While the interaction between the curator’s taste and knowledge and those of the audience is the most important nexus in curation, the choices need to be filtered through the client’s priorities, which, in some cases, sit outside the criteria outlined above. 

To take an example with obvious cultural resonance, if there is public controversy around a particular artist, their otherwise canonical music might be a poor choice for inclusion based on the client’s values. Or there might be something else happening on the platform that impacts potential track selections, such as a marketing initiative that's guiding some of the client's thinking. These needs—cultural sensitivity, internal promotions—form a third pillar of music curation for platforms.

The most skilled curators have an instinct for how to best balance these sometimes competing needs. Doing so requires an understanding of music, the audience, and the platform, which leads to putting the right song in the right place at the right time.

The Data Forensics of a Viral Trend

In the ever-fluctuating attention economy—full of new and competing DSPs, rising and dying social media apps, and a proliferation of short-form videos—an interesting phenomenon has been occurring. It seems like every week, a crop of Gen-Z content creators are unearthing ‘80s indie darlings, ‘00s R&B, and shoegazey rock b-sides, causing these catalog finds to explode in interest among young, internet-savvy consumers. It's a kind of music discovery process that would be viewed as typical if it didn’t lead to such calculable mainstream success. 

Of course, not every artist’s back catalog is destined for a wider audience, and these trending tracks often disappear back into the past as quickly as they emerged into the present. But there are ways to predict what songs will flourish. Programmatic data combined with the contextual expertise of music journalists—aka music intelligence—can provide plenty of answers when you're seeking to pinpoint the songs striking that nostalgic sweet spot that will grab potential listeners for a project, or to surface an artist whose older music warrants a second go. 

Use data as a guidepost

A quantitative analysis is usually a good starting point when looking for a potential sleeper hit, but while it may help guide some initial hunches, it shouldn’t always be taken at face value. Human intuition is the best complement to clear, stratified data.

Using TBC's proprietary data tool, we've pulled two notable catalog tracks—i.e., songs that are over five years old—to highlight. These numbers represent what our analysts have determined to be the most important data points when seeking to identify a breaking track, such as the number of TikTok videos using a song and the number of Spotify editorial playlists featuring it. Since many catalog tracks have gotten a foothold in TikTok videos in the past, we can start by looking there. The reigning track in this category looks to be Justine Skye's "Collide (feat. Tyga)," which was a minor hit in 2014 but gained new life on TikTok in late 2022 and was certified gold by the RIAA in March 2023. With almost 10M videos of the track on TikTok compared to the 2M videos of 2015's "Makeba," it at first appears that this is the song worth paying attention to.

But a few other data points tell a slightly different story. French pop singer Jain’s “Makeba” has a much higher TikTok video count from the past 30 days: 1.8M compared to the 400K of “Collide.” Some notable editorial playlists have also featured "Makeba"—Spotify’s Viral 50 USA and Viral 50 Global—and it has been featured on 59 playlists overall. This is the sign of a viral catalog track that is currently trending, and as such, it’s wise to try to capitalize on it as soon as possible.
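To make that comparison concrete, here is a minimal sketch in Python of how all-time TikTok volume, 30-day volume, and editorial playlist count might be folded into a single momentum signal. This is not TBC's proprietary tool; the field names, the weights, and the playlist count for "Collide" (which isn't cited above) are illustrative assumptions, and only the other figures come from the piece.

```python
from dataclasses import dataclass

@dataclass
class CatalogTrack:
    title: str
    tiktok_videos_total: int   # all-time TikTok videos using the track
    tiktok_videos_30d: int     # TikTok videos from the past 30 days
    editorial_playlists: int   # current editorial playlist count

def momentum(t: CatalogTrack) -> float:
    """Favor recent velocity over lifetime volume.

    The share of a track's videos created in the last 30 days is a rough
    proxy for current momentum; editorial playlist adds give a small boost.
    The weights are illustrative, not calibrated.
    """
    recency_share = t.tiktok_videos_30d / max(t.tiktok_videos_total, 1)
    playlist_boost = 1 + 0.01 * t.editorial_playlists
    return recency_share * playlist_boost

tracks = [
    # The "Collide" playlist count below is a placeholder, not a reported figure.
    CatalogTrack("Collide (feat. Tyga)", 10_000_000, 400_000, 10),
    CatalogTrack("Makeba", 2_000_000, 1_800_000, 59),
]

for t in sorted(tracks, key=momentum, reverse=True):
    print(f"{t.title}: momentum = {momentum(t):.2f}")
```

Under these assumptions, "Makeba" ranks well ahead of "Collide" despite having a fifth of the lifetime video count, which matches the read above.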

Look for converging factors

Identifying a catalog track that is spiking in popularity is step one. Step two is making sure it’ll be viable to use for the audience in question. A wider consumer base will likely include listeners who may not be as internet-cultured as the song’s earliest adopters. Here’s where the journalistic sensibilities in a music intelligence approach come most in handy: Doing some research on “Makeba” can provide a better picture of its current popularity, and if it has the potential to sustain that attention. 

The 2015 French song, named after South African civil rights activist Miriam Makeba, had a few previous waves of fame before its most recent TikTok comeback. In 2017, “Makeba” peaked at No. 8 on the French Singles Chart, and its music video was nominated for a Grammy award. Then, in 2018, the song featured in a Clio-award-winning Levi’s commercial, reaching the top of Billboard’s TV Commercials chart in February of that year. The track clearly has a way of striking a chord with audiences (and, interestingly, was actually used in some of TikTok’s earliest months, stretching all the way back to 2017!).

Digging into the current popularity of "Makeba" produces some interesting insights as well. While the song began trending on TikTok around May with a viral dance trend, Jain can credit the song's explosive growth to a June TikTok post that showed Bill Hader dancing to the song in a cut-for-time SNL sketch. Hader's popularity exists well outside this TikTok trend, of course: In addition to his eight seasons on SNL, the comedian starred in four seasons of the critically acclaimed HBO comedy Barry, whose final season hit a ratings high and ended in May 2023. His goofy moves also had a life of their own; a now-defunct Twitter account from 2019 would post videos of Hader dancing to various popular songs on the platform.

Examine the changes in context

All of these underlying threads—data, history, and context—converge on one viral catalog hit, and prove that a track might be worth more than just its nostalgia factor. For marketing purposes, it's worthwhile to examine how the track is currently being received versus earlier in its lifespan. Named after an activist, "Makeba" had an original spirit of embracing diversity and communal gatherings—Levi's 2018 commercial followed that idea, showing people around the world dancing to the song together. But in 2023, the song has spread through TikTok with its original context stripped away, serving as a simple dance trend or the soundtrack to a relatable thought.

This is another reminder that viral moments shouldn't be analyzed in a bubble, but as a reflection of a passage in time; trends ebb and flow, and popularity isn't always one straight rise to the top. But a confluence of data and key intellectual insights can help map out the moments leading up to a breakout, and what's likely to happen next.

TBC covered more data specifics of “Makeba” in an installment of Sound Signal, our biweekly newsletter that highlights music’s hottest emerging artists and tracks. You can read more in our roundup on Chartmetric, and subscribe to Sound Signal so you never miss another viral trend.

How to Identify the Next Big Artist

Every day, programmers at major streaming services, marketing professionals looking for artist alignment, and A&R leads trying to sign the next big star are confronted with hundreds of thousands of artists trying to grab their attention. Most likely, they already have a couple of techniques and tools for weeding out the duds, but each source can be its own beast, with overwhelming amounts of data to sort through and little framework for contextualizing it. And of course, the music industry is changing so rapidly that an artist can easily lose momentum as quickly as they gained it. Buzz is ephemeral, and while instincts are important, they can't fill in every blank. 

That’s where music intelligence comes in. It's an approach that leverages an analytics system that pinpoints the most important morsels of data, and pairs them with human insights from writers and journalists who have wide-ranging expertise in specialized genres, scenes, and territories. With a combination of quantitative data and qualitative knowledge, it’s possible to identify artists who are only just breaking out. 

Understanding the data

A glance at one of Third Bridge Creative's intelligence tools reveals a wealth of information for making informed decisions about artists who warrant attention. The bar graph at the top presents a first snapshot. The bars show the number of artists enjoying some level of growth, sorted by career stage (undiscovered, mid-level, or developing); clicking a colored section of any bar filters the results below to display only those who meet that qualification. This classification system—devised by TBC's collaborators at Chartmetric—captures how quickly an artist is reaching broader recognition. In addition, TBC's algorithmically driven Artist Score aggregates various metrics and reflects the overall relevancy of an artist for any particular project, tightening the focus on the artists that matter most.

The playlisting, short-form video, and charting scores help explain why a given artist is gaining momentum. The playlisting score looks at numerous trendsetter and emerging-artist playlists, capturing those who make noteworthy appearances. The short-form video score indicates an artist's prominence on TikTok or YouTube Shorts, while the charting score reflects their presence in trending charts on major DSPs.

The list of artists now consists only of potential trendsetters who are undiscovered or developing and have some form of trending growth. Adding the TBC Score makes it clear that these artists deserve attention. With the data sorted, it’s time to look a little closer.
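As a rough illustration of how those pieces fit together, the sketch below filters a pool to early-stage artists with trending growth and sorts them by a blended score. The weights, threshold, and field names are assumptions made for the example; the actual TBC Artist Score and Chartmetric stage classification are proprietary and certainly more involved.

```python
from dataclasses import dataclass

@dataclass
class Artist:
    name: str
    stage: str               # "undiscovered", "mid-level", or "developing"
    playlisting: float       # 0-100: trendsetter / emerging-artist playlist presence
    short_form_video: float  # 0-100: prominence on TikTok or YouTube Shorts
    charting: float          # 0-100: presence on major-DSP trending charts

def blended_score(a: Artist, weights=(0.40, 0.35, 0.25)) -> float:
    """Blend the three momentum sub-scores into one number (illustrative weights)."""
    w_play, w_video, w_chart = weights
    return w_play * a.playlisting + w_video * a.short_form_video + w_chart * a.charting

def shortlist(artists, stages=("undiscovered", "developing"), min_score=60.0):
    """Keep early-stage artists whose blended score clears a threshold, best first."""
    picks = [a for a in artists if a.stage in stages and blended_score(a) >= min_score]
    return sorted(picks, key=blended_score, reverse=True)
```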

Expertise pays off

This type of tooling and data-sorting is invaluable for programmers and curators, and can help them quickly focus on a small pool of viable candidates for their campaigns, playlists, or stations. But it still requires the intuition and research of a trained analyst to understand how to interpret the information and make key decisions. 

In a recent example, the artist known as Maeta had a high TBC Artist Score and a decent Cross Platform Performance (CPP) Velocity of around 50%. The sultry R&B singer has worked with stars like SZA, Ty Dolla $ign, and Kehlani, and she had put out a new album not long before, which might explain her high ranking. She had also appeared on a recent New Music Friday update on Spotify, as well as on 23 other editorial playlists. But despite what appeared to be a consistent DSP push, Maeta had a low count of TikTok Track Posts and didn't appear to be making a big splash on the platform. While her music is smooth and has a tight mainstream sound, it seemed wise to wait a few more weeks after the new-release buzz to see where she stands on this list.

Elsewhere in the data there appeared a few names familiar from TBC’s own Sound Signal newsletter: That Mexican OT and 6arelyhuman. Existing in niche subgenres of Texas trap music and '00s nostalgic hyperpop, respectively, these two trending artists were flagged by experts with knowledge in the scenes where they were gaining traction. TBC had been following That Mexican OT since last year—at the time, he only had modest hits on Spotify, but he was clearly unique, with his Spanish-language bars drawing more from U.S. classic rap than reggaetón. That alchemy paid off: The runaway hit “Johnny Dang” brought him up to a TBC Artist Score of 79 in spring 2023, and he also saw a huge increase in TikTok and Instagram followers. Likewise, the gloomy club music of 6arelyhuman, who was highlighted in a Sound Signal post in early May, has continued to resonate with listeners, with a TBC Artist Score of 75, explosive growth, and more than 100,000 TikTok Track Posts. 

The process of identifying emerging artists will never be simple, given the multitude of evolving factors in artist popularity and listener attention span. No single artist is going to have the highest streaming numbers, social stats, and viral short-form video posts all at the same time. But with a combination of clear, stratified data and informed, focused human skill sets, it's entirely possible to pinpoint an artist with a real spark, who has the potential to succeed in whatever niche genre they belong to.

Third Bridge Creative specializes in the application of a proprietary music intelligence tool and approach. If you'd like to learn more about how we can help make the most of your analytics, please contact us here. And if you'd like to receive a biweekly digest of the artists, tracks, and scenes our music intelligence experts identify, sign up here to receive our Sound Signal newsletter.

What Is Music Intelligence?

We're at least two generations into the world of big data, where data points are generated by the millions and uses for them are multiplying exponentially, all the time. Data can be a powerful tool for understanding what's happening around us and making educated guesses about what's going to happen next. But it's only one type of information, and it will never completely unseat human intelligence and intuition as a likewise valuable tool for evaluating context. This is especially true when it comes to realms that are non-scientific, such as culture. Culture—meaning all the creative and decidedly human things we generate and exchange—is unpredictable and irrational, and that's a large part of what makes it so interesting. 

Every day, people around the world are listening to music using dozens of platforms. And that generates big sets of data that can provide some level of insight into what's trending and what is meaningful. With something as subjective and amorphous as music, though, the cultural knowledge and intuition of humans is essential to making sense of the data. The key is to connect the quantitative (the data) with the qualitative (the human insights that contextualize it). Using those two analytical perspectives in tandem, it's possible to make sense of a mountain of information—combining, sorting, and analyzing it to discover where tastes, trends, and creativity are headed. 

We're calling this music intelligence. The term refers to collecting and analyzing music consumption data and looking for patterns and also diversions from patterns, and then interpreting that information using human knowledge and learned intuition. This approach creates countless opportunities for companies that work in or partner with cultural enterprises of all kinds, including music. 

Distinguishing a blip from a pattern

Take the quirk from late 2022 where Lady Gaga's "Bloody Mary" (off 2011's Born This Way) saw an abrupt spike in traffic. What was going on? The wildly popular Netflix series Wednesday had featured a very memorable scene where the titular character performed a dance right up there with Napoleon Dynamite's most indelible sequence in terms of wonderful weirdness. TikTok noticed. TikTok could not resist the temptation to meme it to infinity. But instead of setting the memes to The Cramps' "Goo Goo Muck," which Wednesday danced to in the episode, the world of TikTok landed on "Bloody Mary." The platform is known for being an incubator where ideas get melted down, stirred together, and spat out as something new. But it takes human understanding to follow the data up the chain, find its apex, and contextualize a phenomenon so particular to its moment. 

Viral trends on social media, like that one, often drive surges in catalog listening, and music curation teams need to examine those trends in order to understand what is happening and why. They can then use that information to create experiences that are relevant and compelling to listeners. The same insights can help owners of vast swaths of user-generated music identify the value in their portfolios in ways that are meaningful and even predictive. And the marketing departments of streaming platforms need data to identify and engage with highly relevant, on-brand emerging talent.

Drilling down

Doing this work effectively requires designing systems that strategically intertwine human expertise with the data, each providing checks and balances on the other. The first step is analysis of the data points, including their sources. For example, an artist or track surging on TikTok is an entirely different phenomenon than one surging on a traditional DSP. The music on TikTok is often not the centerpiece of the content, and while a spike on that platform can sometimes lead to lasting success, it’s frequently ephemeral. To get a sense of what direction a trend is headed, that signal needs to be analyzed alongside ones from platforms where music is the focus.

With an understanding of the significance of the relevant data signals in place, it's possible to construct a simple algorithm that establishes baseline criteria around artist performance across multiple platforms and then weights those signals appropriately. This algorithm can sift a pool of artist candidates to see which of them are likely gaining serious traction, versus enjoying a viral flash. Literally millions of artists (and AI bots) are looking for their big break at any moment, but only a fraction have the skills and timing to earn it.
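A minimal sketch of that kind of baseline-and-weighting pass, with made-up platform weights and thresholds (real criteria would be tuned against historical outcomes), might look like this:

```python
# Platforms where music is the focus count more than TikTok, where a spike
# is often ephemeral. All weights and thresholds here are illustrative.
PLATFORM_WEIGHTS = {"spotify": 1.0, "apple_music": 1.0, "youtube": 0.8, "tiktok": 0.4}
MIN_PLATFORMS = 2    # baseline: meaningful growth on at least two platforms
MIN_GROWTH = 0.10    # baseline: at least 10% week-over-week growth on a platform

def traction_score(growth_by_platform: dict) -> float:
    """growth_by_platform maps a platform name to its week-over-week growth rate."""
    active = {p: g for p, g in growth_by_platform.items() if g >= MIN_GROWTH}
    if len(active) < MIN_PLATFORMS:
        return 0.0   # single-platform spike: more likely a viral flash than traction
    return sum(PLATFORM_WEIGHTS.get(p, 0.5) * g for p, g in active.items())

# A TikTok-only surge scores zero; broader, steadier growth does not.
print(traction_score({"tiktok": 2.50}))                                    # 0.0
print(traction_score({"tiktok": 0.60, "spotify": 0.20, "youtube": 0.15}))  # ~0.56
```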

Human intelligence re-enters the process at this point. Metrics measuring engagement (the number of people listening) and velocity (how quickly that number is growing) are invaluable, but in isolation they can be misleading. That's where highly specialized music experts spanning genres, scenes, and territories lend the big-picture context that's crucial to identifying what's actually happening. This team can include tastemakers, DJs, writers, and people who are themselves musicians, past or present. They can discern the difference between an emerging act being signed to a buzzy label and a sound or genre entering the actual zeitgeist, making it more likely for adjacent artists to gain a broader audience. With the list of relevant emerging talent now sifted again, the remaining pool can still be large—as many as 1,000 artists.

To further winnow it down, data and human intelligence need to operate in tandem again. An algorithm that looks at the variance in the performance metrics across the remaining artists can produce a simple weighted score that accounts for those signals. One Third Bridge Creative tool, for example, presents this score so that a subject matter expert can quickly orient around priority artists. The score enables the expert to provide the final—and crucial—layer: actually listening to the artists and evaluating their music and brand. This is perhaps the most important step, because regardless of what the data indicates, an artist is not going to be popular if their sound isn't compelling.
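One plausible reading of that variance step, sketched under assumed inputs (this is not TBC's scoring code): signals that spread the remaining artists apart carry more weight, while signals that are nearly flat across the pool contribute little to the final score.

```python
import statistics

def variance_weighted_scores(artists: dict) -> dict:
    """artists maps an artist name to {signal name: value normalized to 0-1}."""
    signals = sorted({s for metrics in artists.values() for s in metrics})
    # Weight each signal by its variance across the pool; flat signals get ~0 weight.
    raw = {s: statistics.pvariance([m.get(s, 0.0) for m in artists.values()])
           for s in signals}
    total = sum(raw.values()) or 1.0
    weights = {s: v / total for s, v in raw.items()}
    return {name: sum(weights[s] * metrics.get(s, 0.0) for s in signals)
            for name, metrics in artists.items()}

scores = variance_weighted_scores({
    "Artist A": {"playlisting": 0.9, "short_form_video": 0.4, "charting": 0.5},
    "Artist B": {"playlisting": 0.3, "short_form_video": 0.8, "charting": 0.5},
})
print(scores)  # playlisting varies most, so it dominates; charting is flat and barely matters
```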

Though the process laid out above is oriented around identifying emerging artists, music intelligence isn’t a single product or service. It's flexible and modular, a highly customizable approach to strategic content development and decision-making. The insights can identify trends in catalog music or help streaming platforms prioritize new releases. Marketing teams can also use the information to identify trends within the music world so they can make key alignments. 

In the current world of music consumption, where more than 7 billion tracks are streamed every day, it's impossible to keep track of what's going on without the benefit of data. But to use that data effectively and figure out how to anticipate which 7 billion tracks will be queued up tomorrow and also next month, human intelligence is equally essential, and this is where the music intelligence approach produces results that either method can't achieve alone. 

Third Bridge Creative specializes in the application of a proprietary music intelligence tool and approach. If you'd like to learn more about how we can help make the most of your analytics, please contact us here. And if you'd like to receive a biweekly digest of the artists, tracks, and scenes our music intelligence experts identify, sign up here to receive our Sound Signal newsletter.

Five Unconventional Approaches to Digital Music

It's news to no one that streaming has radically changed the face of music in terms of both consumption and business models. In 2022, for the seventh year in a row, recorded music revenues grew to (ahem) record heights thanks to the wide adoption of streaming by companies, artists, and fans; streaming accounted for 84% of the industry's $15.9 billion in recorded music revenue, according to a year-end report by the RIAA. But many musicians and industry vets yearn to see the industry grow in more ways than just financially.

Thankfully, there's still plenty of room for experimentation, and as has been demonstrated in this industry time and time again, sometimes the biggest ideas come from the bottom up. Here, we take a look at a new crop of digital-music providers who are stretching the idea of what a service can look like.

Resonate

Resonate is a music-streaming platform that operates a little like your local farmers' co-op. Listening fees are structured on a system called "Stream2own": The cost of one stream for a song starts around $0.25, and the price per listen doubles with each successive play. If you listen to the same song nine times, the track is free forever (i.e., you "own" it). It's a simple way to guarantee streaming revenues for every artist on the platform, and for listeners to have visibility into where their money is going—70% of revenues go directly to the artist. As a democratic, cooperative organization, Resonate is owned by a community of musicians, listeners, and developers who each provide their own contributions and have a say in key decisions made by the co-op.

Marine Snow

Logging into Marine Snow's interface can feel like entering an exclusive, high-end auction house, with an emphasis on undersung but innovative art. That's how cofounder Tony Lashley (who has industry experience at SoundCloud, Spotify, and Frank Ocean's Blonded) sees the experimental streaming service, which pays artists an upfront sum that he claims is equivalent to 500,000 US streams. In exchange, the artists grant Marine Snow exclusive rights to host their music for 90 days. Listening is free (for now), and the mobile app pushes a music-discovery experience, offering daily digital capsules that open with random songs and chances to earn "shards," which unlock more song capsules as you interact with the app.

Catalog

Catalog doesn't offer exclusive streams of music. Instead, it sells non-fungible tokens (NFTs) of unique, one-of-one digital records. Selling music on the blockchain creates a transparent path that will always lead directly to the original digital recording, regardless of how many times buyers exchange it or how many copies and remixes of the work are made. Artists own 100% of the first NFT sale of their work, and can determine the percentage they collect from secondary sales. And since the music copyright itself is still owned by the artist, they can determine whether NFT buyers get perks or rewards for choosing to be a part of their musical journey—like when indie-pop artist VÉRITÉ offered behind-the-scenes access to the making of her latest album while it was still in progress last year.

Sound

Sound is another service that operates in the music NFT space, placing an emphasis on community engagement and fan/artist relationships. Artists drop songs via listening parties, which go live on the site a few times a day. While the song is playing through its first public stream, listeners have the option to mint a limited number of exclusive NFTs of the song and leave a comment on a timestamp as a way to make their early support of the artist known. By creating scarcity in the number of NFTs available for each track, Sound encourages listeners to back rising artists at early career stages and build their own collections of unique NFTs. Artists also keep 95% of revenue from their primary NFT sale and can offer perks for token holders, giving them autonomy over how they want fans to interact with their music.

Royal

At Royal, streaming music comes with an extra perk. This Web3 investment company, cofounded and led by DJ and producer 3LAU, highlights the ever-evolving relationship between artists and fans by allowing them to have a shared stake in streaming music. Royal allows users to buy percentages of song or album royalties, giving artists revenue upfront and listeners the opportunity to invest in music they care about and want to succeed. Listeners buy percentages through tokens, which can be sold on secondary markets, and artists can choose what percentage of their rights to offer. While the company is just a few years old, it has partnered with huge artists like Nas and Diplo and has paid out over $100,000 in royalties from artists to fans.

Are We the Music Brain?

In the last couple of months, there's been so much written about AI generally and large language models specifically (ahem, Time Magazine cover...) that the basic logic of these tools is not much of a mystery. In my layman's understanding, the way a lot of popular AI apps work is that they're trained on huge datasets of text, audio, images, etc., and then used to generate new content based on the patterns they've learned to recognize and predict. The result is the impressive applications that have captured the zeitgeist: automated translations, generative music and image apps, and of course the lifelike conversations and cogent college term papers outputted by the likes of ChatGPT.

Like everyone else, I've been fascinated by these developments, and it led to the following 3 a.m. thought: If you ask an AI to generate content related to music—from an artist biography to a list of songs to more sophisticated requests—to what extent is the content it hurls out based on work that I or my colleagues have had a hand in creating?

Perhaps this seems absurd on its face—who would ever claim to have made such a significant contribution to a knowledge base as vast as Recorded Music? But bear with me for a second... For completely unrelated reasons, TBC recently decided to tally up the number of content pieces we've created in the nearly eight years (as of this writing) that we've been doing this work. Get a load of these numbers:

  • 12,571 artist biographies, totaling approximately 4.6M words
  • 21,637 original playlists and playlist updates
  • 58,504 track evaluations, which means our team listened to each track, adding metadata and assessing it for inclusion in various playlists, stations, and collections
  • 2,275 blog posts and feature articles
  • 32,783 in-product descriptions of playlists, albums, or tracks

Now maybe you're impressed or maybe not, but that's just the work that our company itself has delivered to clients. What if you consider the collective output of our contributors? Electronic music shaman and founding TBC contributor Philip Sherburne publishes in the ballpark of 130 articles a year for Pitchfork, and has since 2014. Pop savant Maura Johnston — who's created 3,195 individual assignments (!) for TBC since our founding — is similarly prolific, publishing upward of 200 pieces a year for places like The Boston Globe, Entertainment Weekly, and Rolling Stone. These are just two examples of the roughly 300 folks around the world who work on our projects. And those are just the online articles: Several of our contributors are published authors. So yeah, do the math... I don't think it's a stretch to suggest that the work of Third Bridge combined with its contributor network comprises a non-trivial percentage of all contemporary music writing. I'm not saying it's a large percentage, just that it's statistically significant. 

There is currently a series of class action lawsuits being filed against a handful of AI companies alleging copyright violations, and these could have serious implications for the future of AI-generated content. In the case of a painter like Kelly McKernan, the legal grounding for such a claim is more obvious: People are using a generative AI tool such as Midjourney to request images in McKernan's style using McKernan's actual name. No one's out there asking ChatGPT to "Write a Kendrick Lamar bio in the style of Third Bridge Creative," not least because we nearly always provide our services on a white-label basis, meaning anonymously as far as anyone scraping the internet is concerned. And look, I'm not trying to compare apples and oranges, or suggesting any kind of infringement in our case. I'm merely pointing out that I'm not the only one wondering about the provenance and underpinnings of some of these datasets.

It gets even weirder when I think about how long we’ve been doing this work. Long before Sam and I started Third Bridge, we worked together at a company called Rhapsody, which as Wikipedia will tell you "was the first streaming on-demand music subscription service to offer unlimited access to a large library of digital music for a flat monthly fee." At Rhapsody, we were part of a staff of music experts whose whole job was to catalog the entire universe of recorded music. The colorful history of this group and the way it essentially wrote and curated (and partied) its way to laying the foundation of streaming music is a story for another day, but the gist is we collectively wrote millions of words, programmed thousands of tracks, made zillions of genre and artist associations (which today would be called "tagging"). Some of this data made it out into the ether of the larger internet and some of it remains entombed on some server, and we'll probably never know which is which. But then, who knows how these mysterious datasets that get fed to the AIs are themselves unearthed and organized? 

Back in the Rhapsody days, there was this beloved writer named Mike McGuirk, a genius wordsmith and passionate music fan who by his own admission would have remained a line cook had a friend not recruited him to join an early incarnation of the team. McGuirk was one of the most prolific writers we had, churning out upwards of 30 blurbs a day, every day, for several years, covering everything from Cher to Lightnin' Hopkins to Florida death metal legends Deicide. And Mike was kind of your '80s-Bill-Murray-type irreverent joker, so every now and then he'd insert either a subtle wink or a blatant non sequitur into one of his blurbs, which was certainly not allowed but which we all secretly got a kick out of. One of my favorites is the line he appended to a review of a record by the oft-derided '90s nu-metal group Creed, beautiful in its simplicity: "Wrestling is fake."

Now, I'm certainly no expert in how large language models work, let alone AI algorithms writ large. But based on everything I've just explained, it seems at least possible that me and Sam and the cast of characters we've had the great pleasure of working with these last 20 years have had some kind of hand in shaping the output of whatever music-related request or question someone asks an app like ChatGPT. And if that's true then it seems likewise possible that deep inside the gray matter simulacrum that is this emerging new technology, there exists the sensibility of Mike McGuirk. So if you detect a hint of sarcasm the next time you ask a chatbot a question about Cher, then perhaps that's where it comes from.

Beats, Rhymes and Phife: A Data-Driven Look at a Tribe Called Quest

This post was originally published on April 1, 2016, on The Dowsers, a “magazine about playlists” produced by Third Bridge Creative. You can read more about that project here.

The following post and accompanying graphics are based on data provided by our good friends at WhoSampled, which manages the largest repository of user-generated sample data on the web. Download a hi-res version here and here. Graphic design by Studio Wyse. Illustrations by LeeAndra Cianci.

Consensus has it that the musical touchstone of my generation—the single point in our cultural history that every obsessive remembers—came when Nirvana’s “Smells Like Teen Spirit” blew up on mainstream radio. It’s a JFK moment; many of us can recall where we were the exact second we heard those big, clanky chords. From there, our eyes were opened and the world expanded.

But, in the end, that was more of a black hole. Kurt shot himself, and rock began to eat itself, iterating through various stages of post-grunge, retro rockabilly, rock-rap and other sounds until it became a parody of itself, a fount of boardroom nihilism and artistic inertia. These days, instead of Nirvana, I prefer to remember the first time I heard A Tribe Called Quest’s “Can I Kick It?” I was in a friend’s bedroom in Charlotte, NC. It was around midnight and a bit before my 14th birthday. We were reading Batman comics and dreaming of Gotham, or, really, anywhere other than the staid homesteads of suburban North Carolina.

As music nerds, we’d already digested the Velvet Underground and De La Soul, so we instantly got Tribe’s vibes and references, but blending these two opposing worlds—despondent, glamorous sleaze rock and idiosyncratic, jazz afrocentrism—was a revelation. And their debut, People’s Instinctive Travels and the Paths of Rhythm, was all about connecting the cultural dots. They created universes by cobbling together post-bop saxophones, rolling bass lines, and hard boom bap beats, topping them off with Q-Tip’s fluid freeform rhymes that played an alto sax to the gruff, declarative blurts of Phife’s deceptively straightforward lyrics. 

[Main graphic with animated characters]

That basic formula was there from the beginning, but it changed over time, and this evolution opened up hip-hop, changing its sound and its listeners forever. On their 1990 debut, jazz comprised nearly 20% of all samples. Compare this to 3% for hip-hop overall in that same year. Whereas other producers were sampling soul (50%) or other hip-hop songs (28%), Tribe was drawing from Cannonball Adderley ("Footprints" and "Bonita Applebum"), Lou Donaldson ("If the Papes Come") and Weather Report ("Mr. Muhamad").

Many people will put them in the context of fellow Native Tongues groups such as De La Soul, but that's not entirely fair; on the quintessential album De La Soul Is Dead, that group only used jazz 4% of the time—the majority of their samples came from soul (39%), hip-hop (31%), and rock (15%).

On “Rhythm (The Art of Moving Butts)” from Tribe’s sophomore album, The Low End Theory, Q-Tip raps, “Not selling out, that’s a negative, love hip-hop, love heritage.” It’s one of those value statements that resounds with the young—absolutist, purist, and strong. But it’s also fundamentally conservative, and, between 1991 and 1993 (the year they released Midnight Marauders), Tribe were anything but. Pivoting off the ideas that they laid down on their debut, they created an aesthetic that blended this heritage (which, for Tribe, was jazz) with the more wizened and grimy hip-hop sounds of the time for something that sounded amazingly current and completely singular.

For The Low End Theory, jazz comprised 29% of all samples; for Midnight Marauders, that number was 31%. The types of jazz they sampled also changed. While they still leaned on greats such as Eric Dolphy ("Sky Pager") and Art Blakey ("Excursions"), they were also pulling from the Latin jazz of Cal Tjader ("Midnight Marauders Express") and the soul-jazz of Brother Jack McDuff ("Scenario").

But, more so than just the music, there was another big change: the emergence of Phife. As we show in the graphic above, he only accounted for 10% of all verses on their debut (with Q-Tip delivering most of the rest). That number grew to 26% for The Low End Theory and 39% for Midnight Marauders. The story goes that Phife was diagnosed with diabetes during the recording of The Low End Theory, and, getting a glimpse of his own mortality, was determined to build out a legacy. He pushed Tip both to let him be a larger part of the group and to refocus their collective efforts. Tip wisely agreed.

Much has been made of Phife's conversational flow and everyman persona, and the balance they brought to Tip's more "abstract" style cannot be overstated, but he also brought a playfulness and a set of references that allowed the group to create a more fully formed worldview. One way to look at this is through the various allusions they made to other musicians, obscure cartoons, Blaxploitation icons, various product pitchmen, DJs, and basketball players. For a kid in North Carolina in the '90s, this served as a hip-hop Tumblr, collecting an entire universe that was both familiar and alien.

On their debut, they referenced a total of five athletes, musicians, and movie/TV personalities. On Low End Theory, that number had grown to 70, and, by Midnight Marauders, it hit a peak at 86. Phife pushed them in this direction, but Tip certainly played along. On “Check The Rhime,” Phife drops a reference to the Energizer Bunny while Tip conjures Mr. Clean. They were different dudes, and their references reflect that (Tip drops an allusion to revolutionary black choreographer Alvin Ailey, while Phife brings up the Power Rangers), but it all worked together.

Over the years, this would change. On their lukewarm 1998 album The Love Movement, Phife only had 22% of all verses, jazz had receded to 25% of all samples, and the river of cultural references had dried up to a trickle. But, for a few years, there was no group that did it better, and that sound became the template for everything from '90s headwrap rap and neo-soul to the smoothed-out melodies of The Neptunes' middle period. Eventually, this sound was so ingrained into our musical landscape that it became a cliché. But, in 1990, hearing it for the first time, it sounded like something wholly new and revolutionary. In the subsequent years, many of us have gone searching for that sensation elsewhere, with varying degrees of success. But in 1990, sitting on my friend's bed and leafing through DC comics, it was unmistakable. We may have lost Phife, but those moments will be with us forever.
