Years of Searching for a Community of Peers – A Few Previously Unpublished Thoughts

Revisiting a few of my unpublished articles, notes, and remarks from the last six years, I felt a compulsion to record them once and for all, for whatever the act might be worth. I’d also dusted off my bookmarks folder of all my favorite socio-cultural blogs, only to find that every one of them had since been retired and stripped from the Web. That realization reinforced my resolve to document these thoughts and questions, which still remain on my mind years after I penned them, and to continue my search for a community with which to share these ideas.

Here they are…

Copyleftism, Open Culture, and the Future of Mass Media: A Brief (Immediate) History of Media Culture

03-12-2016 (prev. unpublished)

In the last decade, we’ve seen the growth of niche markets and the rise of user-generated content as YouTube and Netflix quickly replaced television in millions of households.

Similarly, annual revenues of subscription-based music streaming services are on the rise while physical media purchases continue their rapid decline (excepting the niche used and new vinyl markets, which posted yet another year of monumental growth).

Subscription-based media access is quickly replacing broadcast packages: for a fixed monthly fee, consumers can access any media under the provider’s network of licenses (Spotify and Netflix are this year’s most active examples).

And media streaming hardware is gaining popularity, as Roku, Apple TV, Chromecast and Amazon Fire TV are each vying for the public dollar.

In the 3rd quarter of 2014, mobile use hit critical mass, rivaling television use in hours per day.  The smartphone and the tablet were proudly dubbed “America’s First Screen.”  This is a direct reflection of the way users get their news and information and consume their media in the digital age.

The democratization of music-making and filmmaking technologies has made user-generated content a critical element of our global culture.  At present, 300 hours of new user content is uploaded to YouTube every minute.  And, paired with social media, user content can have instant exposure to millions of potential viewers with little to no distribution expense.

More important still is the continued growth of the Open Culture movement.  Wikipedia has become a global primary source of information and has spawned innumerable spin-off wikis of its own.  Creative Commons makes content shareable and relevant as users are free to copy, transform, and combine ideas instead of creators scrambling to secure their works under digital lock-and-key.

The GNU Project, Copyleftism, and Open Culture are growing and having a greater impact on the world with each passing day.  Many major universities have opened their digital doors, offering online course material completely free to the public, and an ever-increasing number of texts, films, and music albums are finding free and legal accessibility on the web.

What does the future hold for these cultures?  By what system will creators be compensated for their work in the digital age?  Will media conglomerates succeed in locking down content, further extending the reach of traditional copyright?  Will the public passively accept forms of DRM as simply part of the digital territory?  What lasting impact will increased media accessibility have on the global audience?

And what’s next?

The following short piece was composed as a conversation with myself, fleshing out the undeniable conflict surrounding the future publication of my book on mass surveillance, digital privacy, free culture, and filesharing, and their impact on previously reigning media distribution models.

This write-up, concluding with an intimate conversation with a scholarly peer, helped me arrive at a very difficult conclusion about my work.

Free

09-03-2016 (prev. unpublished)

I find myself faced with a terrible and heartbreaking conundrum. I’ve written passionately about the subjects of filesharing and of digital privacy for some time now. And to speak of one without acknowledging the other does a great disservice and misrepresents the very real circumstance that we face as a global culture. So both must be addressed.

Sadly, these subjects are strangely taboo in the economy of published works, as the acts are ostracized and demonized, excluded from the global conversation. Whether or not filesharing is a moral act is inconsequential, though there have been numerous examples in recent history demonstrating circumstances where the act serves a far greater morality than its illegality.

It is understandable that anti-authoritarian reference texts by their very nature had to remain somewhat under-the-radar throughout history and in times of revolution. But in an age where subversive guides to filesharing and the protection of anonymity are a single Google query away, why does the world have to pretend that it is a secret anymore?

One might suppose that, if the establishment were to publicly acknowledge the actual frequency and simplicity of free media access, the entire commercial market would crumble in a matter of days. Put simply, nothing can compete with “free.”

But in the age of mass surveillance, there has nonetheless been a tremendous clandestine tidal shift in the public conversation about any information unpopular with the powers that be. Society stubbornly ignores information which is readily and publicly accessible from any of thousands of sources which eliminate the relevance of commercial markets and services.

And this is the very conundrum I alluded to at the outset. In all likelihood, a book published outlining the simplicity and ease of filesharing and highlighting some of the greatest achievements in large, decentralized media library metamapping would be instantly struck down as a corrupt and evil text, and its author(s) would be punished to the fullest extent of the law for inciting anti-authoritarian thought and promoting illegal activities. The RIAA, international media conglomerates, and copyright troll organizations like Righthaven and Rightscorp Inc spend millions of dollars to make a public example of their accused infringers and a guide to its subversion would surely be rapidly extinguished.

There is also the dichotomy of the effect of sharing this sort of information with the public itself. Those who wish to participate in filesharing already have the common sense to search for and educate themselves as to the best acquisition methods and means of protecting their anonymity without the need for a printed guide. (The internet already EXISTS.) So in fact, exposing this widely-practiced and incredibly simple activity to the public discourse may actually result in a net harm to the filesharing community.

The final factor of this puzzle is the nature of the format. The printed word, as beautiful, elegant, and surely powerful a thing as it is, is static and fixed upon the page, whereas discussions of emerging and ever-changing web technology are far better suited to the dynamic and fluid environment of the net. Post-scarcity replicability, revisioning as networks and technologies rise and fall, zero-cost distribution… each of these critically important factors makes the internet – the very home of filesharing communities – the ideal means of disseminating related information. But as I’ve said – a simple Google search will yield all one needs to know. Numerous guides already exist – just none of them are acknowledged by the establishment.

The act of widely-publicizing the simplicity and commonality of filesharing might be enough to disrupt the status quo and inspire a global revolution of media consumption… I just don’t know if I’m ready to die (or disappear) for that cause.

Before 1987, (and long before the passage of the DMCA in 1998), the publication of a work of this nature would have been plausible, as I’d have been protected under The Fairness Doctrine. My work would be justified as in the interest of public welfare and not as a malicious guide written to directly harm the media industries. However, the Doctrine was eliminated by the FCC in 1987. And the DMCA, (written by the RIAA and fellow industry giants), effectively eliminated any trace of that former protection, silencing this conversation and others like it from the public discourse. If the text were published today I would instantly become the target of countless litigations and would be sued in perpetuity. Most likely, my credit would be destroyed and my wages garnished by as much as 60%, ruining my livelihood in the US. My only course of action would be to flee the States, to seek asylum under a foreign government (or lack thereof), and to live out the remainder of my life in exile.

This isn’t just a statistical likelihood. Based on the legal actions of the media industries in their war on piracy, these lawsuits are a guaranteed and inevitable eventuality – precisely the reason that books of this nature do not exist in print, but are instead bound to quiet circulation in less-conspicuous digital environments.

And after constructing a spreadsheet and a library of over 130 books on related subject matter, I penned this note.

Untitled Note

08-27-2019 (prev. unpublished)

I’ve compiled 100+ books on the subjects of Free Culture, Open Culture, Copyleft, Creative Commons, The Post-Scarcity Digital Economy, Linux, and Pirate Culture, from The Cathedral & The Bazaar to Galloway’s The Four.

But the majority of these texts were published before 2010. I’ve pored over metadata on several sites and the only recent publication I’ve found is The Essential Guide To Intellectual Property by Aram Sinnreich; (I LOVED his book, The Piracy Crusade).

Surely the subject isn’t dead? Doesn’t the streaming service revolution, the struggle for artist compensation, and the ever-increasing consolidation of content distributors warrant further discussion of the matter?

Am I missing out on a wealth of analytical and philosophical texts about the digital economy?

As we enter the closing months of 2022, I’ll continue my search for a community where these ideas are actively discussed and debated. Perhaps one day I’ll find peers with whom to engage and further this discussion.

I welcome my readers’ ideas.

The Return of gmusicbrowser!

Such an exciting day! I happened to visit omgubuntu.co.uk and a headline caught my eye from December of 2020 which read, “GMusicBrowser is Back From the Dead with New GTK3 Port.”

This was thrilling news, as gmusicbrowser was my favorite large music library manager for Linux back in 2015. Back then I’d published an article after discovering the application and had described it as, “a robust utility with impressive handling for libraries in excess of 100,000 tracks, and best of all – a fully-customizable interface.” Sadly, development of the application halted several years ago and the Ubuntu Software Center retired it in favor of the simpler but powerful Clementine application. If you’re curious, Slant.co published a detailed side-by-side comparison of the two applications here.

Searching the web for more news on the release I found an article from March 1, 2021 on Linux Uprising titled, “gmusicbrowser Music Player Sees First Release In More Than 5 Years.”

While the application is not available from the Software Center, installation is manual but fairly simple for Ubuntu users: download the .deb package at http://gmusicbrowser.org/download.html

This however was only half the battle for me, as I had painstakingly crafted a custom application layout for gmusicbrowser to let me browse my library by folder structure and by multiple points of metadata all at once. I dove into my archived documentation and was elated to find that I’d taken detailed notes on how to install the custom layout I loved step by step.

From my notes, I saw that the layout mine was based upon was titled “laiteAraknoid2” – one of several layouts included in a package formerly available from vsido.org. Sadly, the download link from 2015 was long since broken, but, ever the archivist, I found that I had downloaded and saved the package to my local file system along with an instruction guide I’d written on how to restore it!

I followed my six-year-old instructions to the letter, and was overjoyed when the next launch of gmusicbrowser instantly restored my custom tweaked version of the layout along with all my folder configuration and user settings! The entire process took fewer than five minutes! All that was left to do was rescan the library for all the content I’d added in subsequent years. Three hours and 45 minutes later I was all synced up and ready to go.

Here is a snapshot of the layout with one of my primary audio folders selected. I have a little tidying up to do with some of the metadata but that’s an advantage of this layout scheme, as I can quickly identify and correct stray tags. This will empower me to explore my library anew! Such a great way to begin the fall season!

Volume Leveling Server Project a Success!

I’m pleased to share my success with a project I first began in June of 2019 but had shelved until today! I’d constructed an ambient playlist of ~130,000 tracks on my server for background listening, which I enjoy for an average of 19 hours each day while I work and while I sleep. Unfortunately, I found that many tracks were mastered with considerable differences in signal processing, dynamic range compression, and equalization. The result was that some albums had a perceived loudness far greater than others, which disturbed my concentration and my rest.

Thankfully, a bit of research revealed that I was not alone in this concern, and that digital audio engineers had addressed the issue with a feature built on audio metadata tags such as ID3v2, outlined by Hydrogenaudio as the “ReplayGain 1.0 specification.”

Most digital music library applications feature a ReplayGain function, permitting the user to apply, automatically or manually, gain adjustment values stored in the metadata of the music file to nudge the volume up or down as required, and my Linux desktop audio software was among them.

Automatic loudness measurement, (the formula for which is available on the Hydrogenaudio wiki), can be applied to selected tracks individually, or to an album as a whole. The album option, Hydrogenaudio notes, “leave(s) the intentional loudness differences between tracks in place, yet still correct(s) for unmusical and annoying loudness differences between albums.”

The challenge was to find a mobile media server client which retained and interpreted the replaygain values during transcoding. I experimented with various mobile applications to find one which natively supported both gapless playback and replaygain.

Researching forum discussions on the subject led me to an independent fork of my preferred media server application available for Android. The project was a success! After batch processing the ReplayGain values for the ambient segment of my library, the adjustments I applied to the track metadata were successfully interpreted and rendered during playback in the mobile application!

This small victory will have a profound impact on my daily and nightly listening sessions. I’m so glad I kept my notes and revisited the project!

Replaygain Screenshot 01-24-2020

The Ultimate Index v3.0 – The Innerspace Labs Media Exploration Master Workbook is LIVE!

It’s been a magnificently productive day at Innerspace Labs, and we’ve reached what is to date our most significant milestone. I published a feature last March about the evolution of my life-long list-making of sound works, cinema, and literature that I’ve been meaning to explore. These lists also served to touch upon some of the special collections in my archive.

In the previous article I described how this process began with leather pocket journals, and as the scale of my library grew I began to publish annual print editions itemizing large collections.

Innerspace Labs Archive Index Books 2013

Innerspace Labs 50 Top Artists Book

These efforts were radically transformed several years ago when I migrated to Google Drive. But as the years passed and spreadsheets and documents multiplied, it rapidly became apparent that I needed to consolidate all of these various lists into a single, deeply searchable index; otherwise countless lists would be forgotten and disappear into the digital void of my Google Drive.

Thus began the Innerspace Labs Master Workbook project this past spring, though the venture posed several new dilemmas. As the workbook grew to nearly 200 tabs, I received an error stating that Google Sheets workbooks are limited to 5 million cells.

Google Error 5 Million Cells Spreadsheet Workbook.png

And it quickly became evident that navigation of all those tabs was painfully arduous in the mobile environment, as was its loading time. Thankfully, after careful research into various potential solutions, I’ve implemented a system of scripts and formula expressions which make navigating this large workbook a snap and its interactive response time nearly instantaneous.

By combining over 200 named ranges and incorporating a primary dynamic drop-down and a dependent secondary drop-down field, along with an “=INDIRECT(CONCATENATE” expression calling named ranges based on user input, I was able to hide and lock all but one master sheet and make the entire workbook navigable from that single homepage.
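For readers unfamiliar with the pattern: the two drop-down values are concatenated into the name of a named range, and INDIRECT dereferences that name. The same dynamic-lookup logic can be illustrated in Python (the category keys mirror my workbook’s themes, but the list contents are hypothetical placeholders):

```python
# Rough Python analogy of the Sheets pattern: the two drop-down values
# are concatenated into the name of a named range, and that name is
# dereferenced dynamically -- the same job =INDIRECT(CONCATENATE(...))
# performs in the workbook. List contents are hypothetical placeholders.

named_ranges = {
    "SOUND_Ambient": ["Placeholder Album B", "Placeholder Album A"],
    "LITERATURE_Essays": ["Placeholder Essay A"],
}

def lookup(primary: str, secondary: str) -> list:
    key = f"{primary}_{secondary}"      # the CONCATENATE step
    values = named_ranges.get(key, [])  # the INDIRECT step
    return sorted(values)               # auto-alphabetize, as in the sheet

print(lookup("SOUND", "Ambient"))  # ['Placeholder Album A', 'Placeholder Album B']
```

The advantage of the string-built key is that new named ranges become reachable without editing any formulas, which is what keeps the single homepage maintainable as lists are added.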

The home sheet offers the user a primary drop-down of LITERATURE, SOUND, or VIDEO, which in turn controls a secondary dependent drop-down to populate and auto-alphabetize a list of all related content for that category.

I’ve also employed a Google Apps Script on a time-driven (clock) trigger which rescans the entire workbook for newly-added lists and automatically incorporates them into the search fields, alphabetically and by category, as the workbook continues to grow.

I understand that it may not have significant value to anyone other than myself, but it’s intended to serve as a reference document along with the over 200 pages of archive summaries I’ve drafted in a companion Google Doc. With this easy-to-reference Workbook, I can pull up a list in seconds and start exploring. My hope is that the project helps introduce me to some spectacular content and that it helps me rediscover forgotten areas of my library.

The next phase of the project is to apply uniform formatting to all lists, as these were drafted independently over the course of nearly a decade, so I apologize for the crudity of its present format. And of course, there may be errors or omissions among the lists. But you know that I’ll work tirelessly to make this project as accurate and accessible as I can.

Here is a link to a copy of the latest version. It showcases and attempts to organize ~26,000 of the most noteworthy elements of my personal library and related subjects of interest. All cells are locked for editing except the two dynamic drop-downs, which is sufficient for general users to explore and interact with the document. It’s far from perfect, but it’s a labor of love that I will continue to work on and which I hope will enrich my life as it continues to expose me to some of the greatest works of the ages.

The Ultimate Index: The Innerspace Labs Media Exploration Master Workbook

February has been a whirlwind of productivity and I’m excited to share the results of my efforts. Thus far I’ve introduced five projects. First I discovered that the disk snapshot solution I’d been employing for my server would no longer work at its current scale, so I had to research and implement a new solution. Once that was a success, I set myself to the task of merging and updating two music database systems I’d created years apart on two different operating systems. That was an incredible challenge.

The next three projects were featured here at Innerspace Labs – first the Nipper RCA “His Master’s Voice” project, then the six-hour drone high-fidelity ambient experiment with Eno’s Music For Airports, followed by the Fred Deakin archive update. But it was the sixth subsequent undertaking which would consume countless late night hours as the latest project continuously exploded in scope and scale, each time introducing new challenges to test my problem-solving skills.

For as long as I’ve been breathing, I’ve been compiling and organizing lists on all manner of subjects. I thrive on creating order from chaos – chronicling and curating media of the 20th century. As a young man, I penned lists in leather pocket journals but was frustrated by the fixed and static state of the data one committed to the page. I quickly graduated to Microsoft Office and then to LibreOffice, and by 2013 began self-publishing books of collected lists and spreadsheets to document the progress of my archive.

Innerspace Labs Archive Index Books 2013

Innerspace Labs 50 Top Artists Book

But the true game-changer came when I adopted the Google suite of apps, most notably Google Docs, Sheets, and the Google Keep task manager. These applications introduced undo history, increased accessibility, and most importantly, shareability to my list-making efforts.

Still, the seamless convenience of Google Drive came with a caveat – scores of lists once generated were quickly forgotten, and the sheer number of them made Google Keep and Google Calendar reminders cumbersome and an ineffective method of managing them at this scale. What I came to realize was that dozens of quality sets of information were disappearing into the digital black void of a Google Drive overrun with lists.

That’s what inspired this latest project. I decided to survey my entire history of list-making, compiling databases created in a wide array of formats and constructed on multiple platforms over the years, and to merge them all into a single workbook on Google Sheets. It was an incredible challenge, as the formatting of the data varied tremendously from .M3U to .PUB to raw .TXT to .XLS to proprietary database systems built for Windows XP (OrangeCD), to web-based database systems like Discogs and Goodreads which each offered .CSV exports.

To depict folder-structure-based organizational systems, (commonly employed for artist and label discographies), I utilized tree -d > list.txt to capture directory listings for large libraries. To extract %artist% and %title% metadata from the RYM toplist playlists I’d constructed, I developed a spreadsheet combining four formulas to pull nth-row values and to truncate “#EXTINF:###,” expressions and file paths from .M3U lists, outputting a clean list of tracks.
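The .M3U extraction step is also easy to sketch outside of a spreadsheet. A rough Python equivalent of those four formulas might look like this (the sample playlist lines are hypothetical, and real-world “Artist - Title” formatting varies):

```python
# Sketch: pull artist/title pairs out of .m3u "#EXTINF:seconds,Artist - Title"
# lines, skipping the file-path lines that follow them. The sample data
# below is hypothetical; real playlists vary in their title formatting.

def parse_m3u(lines):
    tracks = []
    for line in lines:
        if line.startswith("#EXTINF:"):
            # Drop the "#EXTINF:###," prefix, then split on the first " - ".
            info = line.split(",", 1)[1]
            artist, _, title = info.partition(" - ")
            tracks.append((artist.strip(), title.strip()))
    return tracks

playlist = [
    "#EXTM3U",
    "#EXTINF:123,Example Artist - Example Title",
    "/music/example artist/example title.flac",
]
print(parse_m3u(playlist))  # [('Example Artist', 'Example Title')]
```

Like the spreadsheet version, it keys everything off the #EXTINF marker, so the file paths on alternating rows never need to be cleaned at all.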

In October of 2017 I’d authored The Innerspace Labs Journal: A Listener’s Guide to Exploration in Google Docs as a contextual survey of my larger collections. It spans eighty-four pages, includes an active hyperlinked TOC with an X.XX indexing structure, and has served my needs well for the past two years, but for simple down-and-dirty lists a spreadsheet seemed like a more accessible format.

Screenshot of Innerspace Labs Journal A Listener's Guide to Exploration

And so I constructed this latest effort – The Innerspace Labs Media Exploration Master Workbook – a cloud-based, 180-tab set of spreadsheets combining all of my list data into a single, searchable, sharable index with a hyperlinked Table of Contents for easy navigation. The interface is intuitive, it loads lightning-fast on even the most modest of systems and across all browsers and platforms, it is mobile-friendly, and it will continue to grow as new content is introduced to my library.

The TOC is segmented into four primary themes:

  1. Literature and Essays
  2. Cinema and Television
  3. Sound Pt 1: Music Surveys, Best-Of Lists, and Guides
  4. Sound Pt 2: Artist Discographic Chronologies, Audiobooks, and Old-Time Radio Dramas

While a few of the tabs contain hyperlinks to lists from multi-page sites which do not lend themselves well to text extraction, I’ve done my best to embed as much of the information as possible locally in the workbook itself and to keep the layout consistently uniform to facilitate navigation and clarity.

Screenshot of Innerspace Labs Media Exploration Master Workbook

Unlike the self-published books or the somewhat daunting length of the Journal, this workbook is simple and localizes the data a viewer is most interested in exploring to a single, plaintext sheet for quick and easy reference. The shareability is key to aiding curious listeners/viewers in finding quality content relevant to their interests, and it is simultaneously a tool to empower me to delve into the many areas of my own library which I’ve yet to explore.

This is a milestone for Innerspace Labs, and I will continue to refine and expand the project into the future.

Personal Collection or Archive?: A Closer Look at What Defines a Library

archive

I was recently contacted by Dan Gravell, founder and programmer of the server-based music management software, bliss. Bliss received praise from Andrew Everard of What Hi-Fi and their official website calls it a tool “for people who care about their music collection.” Dan posed several questions about my library, and about what differentiates an average personal music collection from a true archive. He suggested that my response might prove useful as a journal entry at Innerspace Labs, so I’m sharing my response for others who might ask the same questions about their own meticulous collections.

So let’s dive right in –

Regarding the difference between run-of-the-mill “playable” music libraries and what one might call an “archive,” there are a few primary factors which could differentiate the two. The first is one of practical function and intent. If a library is for personal use for playback alone it is most likely the former, whereas a consciously organized collection of significant size and scope which is representative of a particular period or culture and which sheds contextual light on that era might serve a greater, almost scholarly purpose as an archive. Uniformity of structure, organization, navigability, and accompanying supplemental metadata enhance a library such as this to greater usefulness than mere playback. And it appears that it is precisely this focus on consistency by which Dan has endeavored to empower users like me with his bliss project. Another important factor is the long-term sustainability of an archive, which I’ll touch upon momentarily.

Next Dan asked whether my source media is exclusively physical. My collection comprises only a few thousand LPs, with a significant focus on the history of electronic sound. This spans the gamut from early notable works of musique concrète to the Moog synthesizer novelty craze, all the way through the international movement of ambient electronic music. I’ve also a predilection for archival box sets, like the Voyager Golden Record 40th Anniversary set with companion hardcover book and the special release from The John Cage Trust, as well as the previously unreleased collection of Brian Eno’s installation music issued earlier this year on vinyl with a new essay by Eno. But the bulk of my library is digital. This is both for practical and financial reasons, as digital libraries are far easier to maintain. (I don’t blog about digital nearly as often, as 450,000 media files are nowhere near as fascinating as a handsome limited edition LP!)

Dan also inquired about my workflow, which is critical to any archive. Early on in the development of my library, (around 2002-3), I began ripping LPs with the following process:

Exclusive analog recordings are captured using a Denon DP-60L rosewood TT with an Ortofon 2M Red cart, powered by a McIntosh amplifier (later replaced with a vintage Yamaha unit), and are saved as lossless FLAC via an entry-level Behringer U-Control UCA202 USB audio interface. I previously utilized a Cambridge Audio DacMagic, but after it failed I opted for the Behringer, and it has been more than sufficient for my needs. Audio is captured using Audacity on my Linux-based DAW; basic leveling and noise reduction are performed, but I minimize post-processing to maintain as much of the original audio’s integrity as possible.

Dan specifically inquired as to where the library information was stored (barcodes, etc) and asked about my policies on which metadata are included. This is fairly straightforward, as nearly all of the vinyl recordings I ripped pre-date the use of barcodes or were limited private releases with only a catalog number, which I bracket as a suffix in the release folder path.

Polybagged LPs are stored vertically and organized by primary genre, then by artist, then chronologically by date of issue. Due to the entropic property of vinyl playback, discs are played once as needed to capture the recording and subsequent playback is performed using the digital files. I employed a dozen static local DB applications over the years for my records, but eventually migrated to a Discogs DB which increases accessibility while crate digging in the wild and provides real-time market value assessment for insurance purposes.

But honestly, I almost never need to perform the rip myself, as the filesharing ecosystem has refined itself to the point where even the most exclusive titles are available through these networks in lossless archival FLAC with complete release details. There has never been a better time to be alive as an audio archivist.

Once digitized to FLAC, my assets are organized with uniform file naming conventions, with record label and artist parent folders and parenthetical date-of-issue prefixes for easy navigation. gMusicBrowser is my ideal playback software for accessing large libraries in a Linux environment. Release date and catalog numbers have been sufficient metadata identifiers, as subsequent release details are only a click or a tap away on Discogs. Occasionally I will include a contextual write-up in the release folder where warranted, as in the case of William Basinski’s The Disintegration Loops 9LP + 5CD + DVD set as it relates to the events of 9/11.
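As a concrete illustration of that convention, label and artist parent folders with a parenthetical year prefix and a bracketed catalog-number suffix, the path construction could be sketched as follows (the helper name and the sample release are hypothetical):

```python
# Sketch of the folder convention described above: record-label and
# artist parent folders, a parenthetical date-of-issue prefix, and an
# optional bracketed catalog-number suffix. The helper name and the
# sample release are hypothetical.
from pathlib import PurePosixPath

def release_path(label, artist, year, title, catalog=None):
    folder = f"({year}) {title}"
    if catalog:
        folder += f" [{catalog}]"
    return PurePosixPath(label) / artist / folder

print(release_path("Example Label", "Example Artist", 1978,
                   "Example Album", catalog="EX-001"))
# Example Label/Example Artist/(1978) Example Album [EX-001]
```

The year prefix keeps releases chronologically sorted by any file manager, which is the point of baking the date into the folder name rather than relying on tags alone.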

Next Dan inquired about how my archive is accessed. I employ Sindre Mehus’ Subsonic personal server application on my Linux DAW to make all of my audio and music video film content accessible from my phone, tablet, or any web-enabled device. I use both the official Subsonic app and the independently-developed Ultrasonic fork by Óscar García Amor for remote access of my library, (about eight hours daily). You can see a short video walkthrough of the features of the app that I put together here:

To return to his initial question about what differentiates a playback collection from an archive, my own library incorporates a few key factors which might qualify it as the latter:

– lossless bit-perfect FLAC wherever possible
– index documentation
– a systematic process guide for new acquisitions
– a 76pp manual highlighting special collections and large libraries of the Collection
– disk mirroring in multiple physical locations for preservation and sustainability
– fire protection for further indestructibility
– routine disk operation tests to mitigate risk of data loss
– complete discographic record label chronologies suffixed with catalog numbers
– elementary data visualizations created using Gephi and Prezi web-based tools
– the use of TrueCrypt whole disk encryption to prevent unauthorized access
– and the active use of Subsonic and Ultrasonic for enhanced accessibility

And scale is another noteworthy factor in my circumstances. Just to cite one example, I’ve collected every LP and single issued by the electronic duo Underworld that I’ve been able to get my hands on, and the digital audio branch of my Underworld collection comprises 482 albums, EPs and singles, including 2850 tracks and DJ sessions totaling well over 385 hours of non-stop music, spanning 36 years of Karl Hyde and Rick Smith’s work in all of their many incarnations. This collection is uniformly tagged, organized into a network of categorical root folders, and substructured into chronological subfolders by date of release. And the complete record label collections are a definite differentiator from the majority of casual-listening libraries.

I understand that my archive is small compared to the 12-20 TB libraries of some more seasoned users, but I feel that discretion and selectivity are virtues of my personal collection, allowing me to focus on only the most exquisite and remarkable recordings within my principal genre foci.

So what about your own collections? Do you employ standardized uniform file naming conventions and organizational standards? Do you supplement your library with relevant documentation to add context to your media? Does your collection offer insight into a particular era or musical culture? And do you take measures to ensure the longevity and sustainability of the work? If so… you might just have an archive.

Supplemental Note:

A good friend was kind enough to offer his thoughts about what sets an archive apart from other collections, and his remark was too good not to share. He said –

I think another major difference between the average personal collection and an archive is retention and adaptation.

A casual listener or collector wouldn’t have the retention of a true archive. The individual may build some playlists or even some advanced structure for locating and listening to music, but there is a very good chance that after some time, that particular music will get buried by the newer, or the most current thing the user is listening to. The casual listener may not want the huge or growing library, so when they feel they have moved on, the music will be removed from their collection. I cannot see someone who is keeping an archive remove anything from their collection. So retaining the entire collection and not removing anything because they are bored with it would be a difference.

I also mentioned adaptation. This is a rather basic idea but would be rather important in the grand scheme of things. Let's say you have a collection of 100 songs, all with 4 points of metadata. You realize, as you begin to add more songs to your collection, that a 5th point of data is needed. A casual listener may leave those 100 songs in the state they're in, with the 4 points of data. The archivist would need to go back and add that 5th point to all 100 songs, as well as to the new ones. Add another zero to those numbers and that becomes a daunting, but necessary, task for the archivist.
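The adaptation step he describes — backfilling a new metadata field across an existing library — is essentially a bulk update. A minimal sketch with in-memory records (the field names and the "decade" field are my own hypothetical examples, not his):

```python
def backfill_field(tracks, field, derive):
    """Add a new metadata field to every record that lacks it.
    `derive` computes the value from the record's existing fields."""
    for t in tracks:
        t.setdefault(field, derive(t))
    return tracks

library = [
    {"artist": "Underworld", "title": "Rez",
     "year": 1993, "format": "FLAC"},   # 4 existing points of data
]

# The 5th point of data: a decade bucket derived from the year.
backfill_field(library, "decade", lambda t: t["year"] // 10 * 10)
```

In practice the same loop would run over actual file tags (with a library such as mutagen) rather than dicts, but the shape of the task is identical: derive the new field from what is already there, and touch every record.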

I really appreciated his input!

The Rise of the Collective Market


Over the course of the last decade, we have seen a significant transition of power – the stranglehold of the market loosening from the hand of the corporate gatekeepers as they are largely replaced by more efficient systems built by the citizens of the internet.

These markets crowd-source the knowledge of community members proficient in a particular field of interest, who build databases, discussion forums, and flat-hierarchy marketplaces that distribute goods far more effectively than previous corporate models.

For example, AbeBooks and Alibris each do a magnificent job of empowering consumers and booksellers alike by creating an easily navigable, flat-structure marketplace where bookshops large and small can offer their titles to a global community without additional overhead.  This creates a buyer's market where millions of titles are available at impressively low prices.


Discogs is another successful user-supported market.  The site's users construct and maintain a detailed database and thriving marketplace of millions of music titles, ranging from Billboard chart-toppers to incredibly rare test pressings.  By adhering to a core (and greatly facilitated) organizational structure for data submission, the site is able to crowd-source a vast and well-organized database.  It also automates personal collection appraisals based on market history, right down to the condition of each item, and even offers catalog submissions via UPC scanning to make library building a snap.  Its marketplace is empowering for record sellers great and small, as well as for music consumers the world over.  Like other online markets, there are significant cycles of inflation, but regulation likewise occurs naturally.


Etsy offers a market for artisanal creative projects.  And Audiogon is a community to help educate users about pro audio gear, with both a forum and a trade-and-sell market of its own.  For every need that arises, knowledgeable users in the community establish a market specializing in that service.  This is a core tenet of the cooperative nature of the internet community.


As with any eBusiness construct, several key advantages separate these ideal virtual markets from the antiquated brick-and-mortar retail chains which came before them.  Firstly, their operating overhead is minimal to non-existent, whereas physical stores must constantly grapple with expenses like construction, maintenance, electricity and heat, staffing, and insurance.  And the physical limitations of a building cripple a storefront's merchandise selection, which is often restricted further by the distributors with which the corporation has aligned itself.


By stark contrast, online markets shed all of the restrictions of physical space.  Most of these markets are user-supported so little staffing is required, and buyers can purchase any of millions of available products from other users anywhere in the world without corporate loyalty to a particular supplier.

These independent markets are far superior to their predecessors in every way, shedding operating expenses and rendering the monopolistic behemoths obsolete and irrelevant.  And as digital media rises to overtake the physical goods market, this obsolescence will only accelerate.

We are witnessing the end of the gatekeeper era.  The Net has given rise to a new and better model of distribution – marketplaces which empower buyers and sellers alike.  These markets, built upon fundamental automation structures and cooperative operation, far more effectively serve the interests of the community.

As John Perry Barlow famously declared to the governments of the world in A Declaration of the Independence of Cyberspace:

Cyberspace does not lie within your borders.  Do not think that you can build it, as though it were a public construction project.  You cannot.  It is an act of nature and it grows itself through our collective actions…

You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces…

You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace.  These may keep out the contagion for a small time, but they will not work in a world that will soon be blanketed in bit-bearing media.

The century-long corporate dominance of our marketplaces is at its end.  Together we have built something better which works for all of us.

We have won.

From Subsonic to Ultrasonic – Do More With Your Media!

Friday evening was a night like any other, but as it happened this particular evening inspired a change to better my circumstances and proved to be most rewarding.

I was relaxing, reading a fascinating book on copyright reform, and enjoying my latest musical acquisitions via my Subsonic media server.  But as each track concluded and the next began, I repeatedly found myself irked by a two-second mark of silence which persistently seized my attention and banished my cozy, zen-like musical trance.

Subsonic is a brilliant and magnificent application, but gapless playback is not among its features.  And this periodic interruption was just bothersome enough to inspire me to pause and find a better solution.  Within a few minutes, I discovered that Ultrasonic, an independently developed Subsonic client, offered continuous playback as well as genre browsing and other features not available in the official Subsonic app.

After testing the application that evening I was so delighted with the result that I set myself to the task of creating a video feature to showcase Ultrasonic and hopefully empower other users like myself to do more with their media.  Google Play reports that only ~1000 users have downloaded the app, but as you’ll see from the feature below, it’s perhaps the best under-the-radar media client out there.

Check it out!

Published on May 7, 2016 at 7:30 am

Is anyone else getting rid of their physical media altogether?

Now that I've purchased my first home, it seems a great time to shed some dead weight from my material possessions. My top 3,000 LPs will stay – I've got them neatly shelved and organized in my office. I enjoy the ritual of interacting with the medium, and nothing beats gatefold artwork. But everything else – cassettes, VHS, CDs, and DVDs – seemed pointless to keep anymore.

Today I boxed up hundreds of CDs and traded them in at a local Disc Exchange for 25 cents each. The cash I made was well worth the space it freed up on my bookshelves for music literature. (Most of the reference texts I enjoy, I much prefer to read in physical format rather than as ebooks.)

Of my ~750 CDs I kept only a handful from artists who really shaped my listening in the 90s. I kept several 20-bit remasters of classic jazz LPs and several debut singles like Reznor’s HALO 1 Down in It, Manson’s Get Your Gunn single and the Live at the Snakepit bootleg, and the 1989 Caroline Records debut single by White Zombie, Make Them Die Slowly. But other than a handful of cassette and CD promos, it really seemed time to let the rest go.

Honestly they will function more as interesting artifacts and conversation pieces rather than as a medium for audio/video playback.

I also spotted a large box of my fiancée's home-taped VHS tapes today. I offered to have her top 5 tapes converted to AVI; the rest we can dump.

Still, I confess – I’m keeping bargain bin VHS copies of cult classics including Santa Claus Conquers the Martians, YOG: Monster From Space, and the Pee-Wee Christmas Special… this is the shit I’m going to force my grandkids to watch someday.

So what about the rest of you digitally-savvy ladies and gents? Do you still hold onto physical media?

Published on October 10, 2015 at 9:46 pm

The End of Scrobbling – A Farewell to Last.fm

Digital music has been a fascination of mine since the turn of the millennium.  Audioscrobbler came into being in 2002 while I was in college, and the thought of sharing my listening with a global network of musical peers was exhilarating.

Audioscrobbler merged with Last.fm in 2005, taking the social element of music to a whole new level.  There were forums to discuss listening trends, metadata analysis and recommendation engines… all while independent blogging exploded onto the scene in a flood of obscure music fetishism.


In the years since, I admittedly lost touch with the service and dropped off the scrobbling radar to focus on personal relationships, collecting unscrobbleable LPs, and developing my career.  As the summer of 2015 came to a close, my life was settling down nicely – I left Windows for Linux, I have a fiancée, a fantastic career, and I've just purchased my first home.

With these stations of life secure, my mind returned to the world of scrobbling and the possibilities of merging big data with my own hyper-specific musical tastes.  I developed a ~500-day plan to scrobble every track from my library, 24 hours a day for over a year, feeding every title into Last.fm's recommendation engine.  Surely a library of over 110,000 tracks would produce some intriguing results!
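The ~500-day figure follows from simple arithmetic. Assuming an average track length of around 6.5 minutes (my assumption here for illustration – the library skews toward long-form electronic music), the back-of-the-envelope calculation looks like this:

```python
tracks = 110_000
avg_minutes = 6.5                 # assumed average track length
total_minutes = tracks * avg_minutes
days_at_24h = total_minutes / 60 / 24  # continuous 24h/day playback
# roughly 497 days of non-stop scrobbling
```

A shorter average track length would pull the estimate down proportionally, but anywhere in the realistic range lands on the order of a year and a half of round-the-clock playback.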

But this evening, I logged into Last.fm and looked around to find that the site has retired all of its original functions.  The social forums are closed.  The “neighborhood” of your peers is now inaccessible.  The homepage offers only a most-popular-globally-this-week roster plastered with “Uptown Funk” and other predictable tracks.


Wikipedia spelled out what I'd missed – CBS had acquired Last.fm for £140 million back in 2007.  In February of 2009, the service reportedly handed listener data over to the RIAA over concerns about a then-unreleased U2 album.  By 2010 the service had closed the custom radio feature (again over licensing issues), and in early 2015 it partnered with Spotify, further crippling the usability of the site.

But the nail in the coffin came in August of this year with a fully overhauled website, which received almost universally negative reviews from users citing broken and missing features.

In light of this information, I'm terminating the full-library scrobble project and saying farewell to Last.fm.  Still, I shall not mourn the loss for long.  The social function of digital music has undergone a parallel evolution in the world of private forums and closed groups on social media sites like Facebook.

Terry Riley – Persian Surgery Dervishes: a magnificent record I discovered thanks to a Facebook record community

Every morning I’m greeted with “now-spinning” rare vinyl treasures and independent music reviews which top anything you’d find from a recommendation engine.  One user from South Korea offered nearly 40 daily installments of records from his Tangerine Dream collection, each accompanied by a custom write-up on the featured release.

Private tracker communities, classic bulletin board systems, and other social structures of the web continue to serve as a brilliant resource for musical discovery.  Last.fm served us well during a pivotal time in the age of digital media, and it will be missed, but we’ll carry on.


RIP Last.fm
2002 – 2015

Published on September 10, 2015 at 9:35 pm