Sunday
Jul 01 2018

I Heart Galleries

Leonardo Da Vinci invented the bicycle. At least, he made a really compelling drawing of one, a good three centuries before the technology required to actually build it existed.

Incredible! That little sketch alone is proof of his profound imaginative gifts.

Lower down on the very same page is another imaginative flight of fancy: a procession of dicks heads towards a slightly puckered hole with the name of Leonardo’s favorite assistant written over it. Talk about inspiration!

It’s hard to say which came first: the bicycle or the butt sex, but clearly one helped inform the other. And that’s how on one smallish page, Leonardo managed to demonstrate that the mechanism for accessing creativity means lifting the lid on a lot of things, and then having the guts to explore whatever might be in there. For that reason, the true difficulty in putting curiosity in charge is acceptance – that is, allowing the implausible, or the ridiculous, or even the forbidden, to lead one forward towards a destination unknown.

Leonardo’s notebooks are historic examples of what we now call pure research, which is the exploration of a subject simply because it seems interesting, rather than because it poses a problem that needs solving. Pure research sounds to our contemporary ears like a lark, an indulgence, a rat hole. Who in their right mind would support, with actual money, the pursuit of whatever-the-fuck just because it sounds fun? Efficiency matters, and pure research would seem to be the poster child for going in exactly the opposite direction.

Except that inefficiency pays. Here are the authors Benjamin Jones and Mohammad Ahmadpoor talking about the link between pure research and the practical application of that research: “Uber and other location-based mobile applications rely on GPS to link users with available cars nearby. GPS technology requires a network of satellites that transmit data to and from Earth; but satellites wouldn’t relay information correctly if their clocks failed to account for the fact that time is different in space – a tenet of Einstein’s general theory of relativity. And Einstein’s famous theory relies on Riemannian geometry, which was proposed in the 19th century to explain how spaces and curves interact – but dismissed as derivative and effectively useless in its time.” (The Conversation, August 10, 2017)

Plenty of useful things that we take for granted came out of pursuing interesting things, simply because they were interesting. Microwave ovens, lasers, Magnetic Resonance Imaging machines or MRIs, and gene editing are just a few examples. It turns out that fully 80% of pure research ends up having an impact on market-ready products somewhere down the line, which is an incredible figure.

The pure research approach paid off for Leonardo as well - and not because one of his works just sold at auction for a mind-boggling $450 million. (Even with the full understanding that it only might be his.) And not because another one of his works – the Mona Lisa – is the most famous painting in the world, by far. And not because he is still one of the world’s most celebrated artists, also by far, centuries after his death. The payoff was to change art itself, from a decorative trade to one based on curiosity, research, and exploration. He managed to single-handedly set a direction and a tone that is still being followed by artists today.

How did he do it? By being as famous for his research as he was for his art. Remembering him for his art alone would have always been a difficult task – he was never a prolific painter, and today there are three or maybe four completed works by him that survive. Compare that to the 7,000 pages of his notebooks that are still with us, out of an estimated 13,000, and we find Leonardo’s real interests – geology, the tides, mathematics, human anatomy, flight, hydraulics, language, and more, all beautifully illustrated and annotated. Leonardo was the archetype for what we now refer to as the ‘Renaissance Man’, someone who sought knowledge for its own sake, and in so doing, became representative of progress itself.

Some out there might say that this ‘payoff’ came a little late for him to actually benefit from it. Sure, it’s great being viewed kindly by history, and changing the trajectory of art, etc., but that didn’t exactly help him with the day-to-day business of living. Luckily for him, he had a guy. A royal guy who basically allowed him the time and space to paint a couple of hours here and there, and go off and dissect a cadaver now and again, or study a rock formation, all while still paying him. Clearly, he was an enlightened guy, or at least a sympathetic one, and that allowed Leonardo to follow his muse. Thanks for that, Duke Ludovico Sforza!

But a little scrutiny makes clear that Leonardo’s situation was less than ideal. That’s because being professionally beholden to the whims of royalty meant a 24/7 immersion in satisfying the needs of others. And those needs were pretty fickle - if you have ever wondered why there is so little work left from the entirety of Leonardo’s life, it has much to do with the fact that he was moved from project to project before he completed most of them, and that plenty of them were half-hearted to begin with. Insisting that Leonardo paint a mural on a wet wall is a great example of that – The Last Supper, which was painted for Ludovico’s wedding, started failing almost immediately after it was finished, exactly as one would expect. It was a wreck a little more than a year later.

That sucked. But the Last Supper did have a silver lining, in that the Duke actually wanted Leonardo to paint something for him - for the most part, the Duke managed to keep Leonardo away from painting, and certainly away from his research, so that he could do other things. And what exactly were those other things? Party planning! And party decorating, ‘natch. ‘Party Planner’ would have been the first line on Leonardo’s resume, certainly above inventor, or artist. Sure, he worked with generals on war machines, and did some field engineering now and again, and sure, he made a painting from time to time, but the truth was that Ludovico liked a good party far more than all that other stuff. A bicycle, or a helicopter, or even a painting, not so much.

But what if there was a system in place during Leonardo’s lifetime that rewarded pure research, without any input from a royal patron? What if such a system worked by facilitating sales and commissions not just to royals, but to rich people from all over the place? What if such a system was agnostic about the actual physical objects that the artists produced? Imagine that this system created a culture that made daring, risk, and verve the main attraction to these patrons, and that the question became whether or not artists were daring enough. In other words, imagine for a moment that instead of working for the Duke, Leonardo had shown in a contemporary art gallery.

Boom!

Now just to be clear, no system is perfect, and certainly galleries aren’t. I’ll get to the dark side in a minute. But for now, let me just say that galleries, as we understand them in a contemporary context, exist as a way for artists to push their work forward in whatever way they choose. There are no editors for artists, like there are for writers. There are no studio executives demanding changes, like there are for film directors. There are no producers, like there are for musicians. There aren’t even clients, like there are for designers and architects. There is just speculative, pure research. It’s hard to think of another commercial model based on as much freedom as the commercial gallery model.

Here’s a list of some of the things that have sold in galleries: cans of an artist’s shit, an art dealer taped to a wall, a guy jerking off underneath an elevated floor, and a guy getting shot. The more people roll their eyes at this stuff, and the more people out there exclaim ‘That’s not art!’, the more it confirms that the model is working: that pure research of a truly expansive kind is not only being done, but being seen, and supported. That’s just an amazing accomplishment for a commercial model to achieve.

All of us who are working as artists today have been shaped, consciously or not, by an enormous sense of freedom provided to us, courtesy of galleries. If you are an artist working today in an idiosyncratic and personal manner, continually challenging the past, thank a gallery for even allowing you to imagine that as a possibility. Finding your own voice and all of that is a luxury that galleries made space for. That freedom has allowed working artists to define ‘art’ in whatever way they choose, and then sell whatever their answer may be, via an established support system.

Holy. Fuck.

It’s a pretty good deal. It’s actually an unprecedented deal. Which is why so many people want in on it - the number of people graduating from college with an art degree exploded after WWII, and yet the gallery system seemed to be able to absorb a tremendous number of them. That’s because galleries grasped early on the clear benefit of getting more people involved. Only by accepting, and in fact encouraging, a really wide spectrum of speculative creativity, was it possible to discover the diamonds in the rough. And galleries have managed to discover a lot of diamonds. As a result, there has been an ever-widening acceptance of what we now consider art, due to galleries showing us a lot of things that would never have been given a chance not so long ago.

At this point, some of you might be getting a faint inkling of a flaw in the system. And it has to do with how successfully we are implementing the idea of inclusion. It’s fair to say that what we all aspire to is an art world free of any assumptions about where good ideas might come from, or who might author them, etc. Pure research in its purest form, am I right? In practice, however, that perfectly open system hasn’t quite arrived yet. What has arrived instead is an overwhelming number of people who have something to say, and have deduced, quite rightly, that there is a sophisticated platform to say it on. The resulting stampede of people has so overwhelmed the art world that it’s necessitated a network of gatekeepers to work the doors as artists clamor to get in. Which is how the age of the gallery has slowly morphed into the age of the Curator.

Curators, gallerists, call them what you will, but their job is basically the same – they spend their days looking at a whole lot of art, and saying ‘No’ to most of it. Sure, they understand the radical inclusion principle as a philosophical construct and all that, but the fact is, not everybody is Leonardo Da Vinci. The number of artists may have exploded in the last 75 years or so, but the number of genius artists has stayed pretty much the same. That means that the task of a curator, or gallerist, or collector, to successfully find said genius, requires sailing an ocean of shit, every day. As Oscar Wilde said: “All bad poetry springs from genuine feeling.”

Now, finding good art as it is happening is tough. Despite the clear difficulty of such an undertaking, it’s remarkable how confident we are that our gatekeepers are succeeding - contemporary art consistently puts sky-high prices on things well before history has weighed in on them. And so far, nobody seems inclined to call that particular bluff. And to be clear, a system based on convincing enough people to simply trust the most charismatic and persuasive gatekeepers out there is a bluff.

At some point, the public has to weigh in on a gatekeeper’s choices. They can’t be right all the time. Writers, moviemakers, fashion designers, or musicians, simply must connect with an audience in order to be relevant – it doesn’t have to be the biggest audience out there, but it has to be big enough, and passionate enough, to actually matter. The visual art world has essentially removed that component. That’s why it’s possible to walk through a contemporary art museum with a non-artist, and have them struggle to recognize, much less name, most of the artists shown. We’ve left it to the audio guide to tell them why it’s good.

Curators and gallerists today are by and large the product of an academic system that has made them into specialists. They are professional lookers, rather than artists. The problem with that is that art itself is not necessarily the product of an academy - if you think about all the musical genres that have come out of America, ragtime, jazz, blues, rock ’n’ roll, bluegrass, rap, and so on, none of those particular genres were created, much less embraced, by the musical academies of the time. The thing that propelled them to relevance was the sheer usefulness they demonstrated for capturing the mood of a particular culture at a particular moment. Imagine that we traded all of that for symphonic music (the music of the academy) instead.

If we follow that analogy into the gallery system today, we see that the gatekeeper system has essentially undercut the way that galleries tried to involve a more general public, and has returned it to a more familiar system, where certain people, like the popes and royals of old, make all the decisions. Remember that the gallery system, in order to truly function, needs more people, more ideas, and more variation constantly moving through it in order to figure out what the most interesting thing is. The gatekeeper system, on the other hand, streamlined that to the opinions of a very few. Sure, there are loads of galleries out there. Excellent! But they are operating in a system that no longer needs them, or even wants them. Things have consolidated to the top, to a very small group of super-galleries, and that is where most of the actual art world resides. Today, the majority of galleries below that upper level are basically on life support.

Despite all of that, artists have little choice but to get in front of these gatekeepers to have a chance at a career, because they are the only audience that matters. And usually the first step is to show in a gallery.

With that in mind, here is a short primer on how an art career in a gallery usually goes: for the first show or two, most artists essentially work for free, because sales will not realistically cover their costs. Also: there is a good chance that this will not end after the first couple of shows. Oops! What this means is that it is quite possible, and in fact typical, to actually pay to show. That’s crazy, you say! Sure it is, and not only because so many people want the chance to do it. What’s really crazy is the amount. Consider the math: artists are responsible for buying all their own materials, paying all of their rental costs, tool costs, health insurance, studio insurance, and so on. Artists run their own websites and social media, because promotion is also part of an artist’s job. Typically, an artist shows every two years or so, which means the outlay of costs stretches over a very long time indeed. And in the end, the hope is that the show will sell enough, usually in a single month, that the previous two years of bills will be covered. That is, after the gallery’s cut, which is 50% of all sales.

I haven’t even gotten to the fine print yet, which is this: despite the fact that artists and galleries are, as mentioned above, 50-50 partners financially, the risk involved is in no way equally shared. That’s because galleries get twelve different chances, at the minimum, in a year to make sales. Artists, on the other hand, have only one, every two years, for one month. (And yes, artists can make sales between shows. This does happen, but usually for established artists, and not for newbies.) Meanwhile, a gallery can lose money on a number of shows, and probably even most, but if there are enough strong sellers throughout the year to cover the costs, then the gallery makes money. For that reason, most galleries now don’t choose to nurture artists’ careers over the long haul, and instead headhunt talent that is hot in the moment. The gallery makes sales on low-hanging fruit, and then later, they can simply drop that artist if they become less desirable. Brilliant! The truth is that in a business where all of the money collects at the top, most galleries couldn’t really operate any other way. Which brings us to now.
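The back-of-the-envelope math above can be sketched in a few lines of code. To be clear, every dollar figure below is a made-up placeholder for illustration, not a claim about any real artist or gallery; the only numbers taken from the text are the 50% gallery cut and the two-year show cycle.

```python
# A rough sketch of the show-cycle economics described above.
# All cost figures are hypothetical placeholders; only the 50% cut
# and the two-year cycle come from the text.

GALLERY_CUT = 0.50   # gallery's share of sales (from the text)
CYCLE_YEARS = 2      # typical time between shows (from the text)

# Hypothetical annual costs an artist carries between shows
annual_costs = {
    "studio_rent": 12_000,
    "materials": 4_000,
    "insurance": 2_500,
    "tools_and_misc": 1_500,
}

def artist_net(gross_sales: float) -> float:
    """Artist's take-home from one show, after the gallery cut
    and two years of accumulated costs."""
    total_costs = sum(annual_costs.values()) * CYCLE_YEARS
    artist_share = gross_sales * (1 - GALLERY_CUT)
    return artist_share - total_costs

# Gross sales needed in a single month just to break even on the cycle
break_even_gross = sum(annual_costs.values()) * CYCLE_YEARS / (1 - GALLERY_CUT)
```

With these placeholder numbers, $20,000 a year in costs becomes $40,000 over the cycle, which means the show has to gross $80,000, usually in a single month, before the artist nets a single dollar.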

I heart galleries. The reason they exist at all is because certain brave souls based an entire business model on a simple, yet profound observation - audiences can be challenged with creativity, because audiences will look with creativity. That means a gallery only works under the basic assumption that everyone is creative, and that artists and audiences coming together will be synergistic, and expansive.

I heart galleries because they encourage a pure research model, and as such, they are willing participants, and indeed key facilitators, in the ongoing discovery of art. By backing an artist, they back that artist’s exploration, and the gallery is then tasked with finding a way to determine the value of the outcome. For all the criticism galleries get, for all of the angst and hand-wringing they inspire, most of the strengths that I’ve just described are still deeply baked into galleries today.

Still, it’s fair to say that artists aren’t exactly blind to the financial situation, which does put them in a significant pickle early and often. They’re also not blind to the concentration of influence at the top. Some of them probably think that galleries have always operated the way they do now. But consider this: at the beginning of the 20th century, Daniel-Henry Kahnweiler (Picasso’s dealer) would make regular visits to Picasso’s studio, and whenever he saw a particular work he liked, he would buy it. Over time, the works he collected would become a show that he would mount in his gallery. The sales in the gallery would be split 90% to Kahnweiler, 10% to Picasso. Over time, that shifted to 80/20.

That’s how the entire gallery system was run. Kahnweiler made sure that Picasso always had money, and never had to worry about financial ruin while he was making a show. The gallery system as it existed then essentially paid Picasso to paint. And in return, Picasso was motivated to work all the time, and make really good paintings. It also made sure that Kahnweiler was fully motivated to make sales - which involved a full-throated defense of Picasso as an artist - because he had already made a significant investment in the work by the time he showed it.

Over time, the system got overwhelmed with artists, and rather than figuring out a viable way to involve more people equitably, there was a return to a more familiar system of gatekeepers. What we are left with today is the premise of the gallery, which is no longer economically viable, grafted on top of a royalty/patronage model, which actually is.

So what would Leonardo do? Figure out how to get paid, that’s what. With that in mind, it’s worth remembering that the one thing everyone agrees on when they first see the Mona Lisa is how underwhelming it is. No one stays looking at it for long. And why do I mention that? Because being underwhelmed by the Mona Lisa reminds us that Leonardo was only a painter part of the time, by design. He never even delivered the final portrait. That doesn’t take away from his gifts as a painter, which were incredible, but he understood better than anybody that he got paid to be broadly creative, rather than to live or die on painting commissions alone.

He was aware of the fickle nature of taste, and would never have based his career on accommodating it. That’s why he wasn’t merely a painter; he was a thinker, a researcher, and a tinkerer. (And a party planner.) That whole package is what made him so important to his fellow artists. It’s what propelled Vasari to write about him in his ‘Lives Of The Artists’, and that in turn is what made him famous, then and now.

One more thing about the Mona Lisa: Leonardo would probably be amused by the fact that in order to even get to it, displayed behind its thick bulletproof glass, people have to walk by a whole bunch of other masterpieces (including another by him), and they barely expend a gaze or a glance on any of them. They are singularly focused on the task at hand – to see the most Famous Painting In The World. And as they finally, finally, lay eyes on this important cultural artifact, this object beyond price, you can almost hear a faint scratching sound as they cross seeing it off their to-do lists, and then they slowly shuffle out.

Creative art lives when a creative audience shows up, and it dies when they don’t. A royalty patronage model does not require a creative audience, (it just needs a king) but a gallery does.

Yet... a lot of people seem to think, with understandable justification, that the gallery system no longer works, and we should just wash our hands of it. Galleries seem a little bit like democracy in America – an inclusive idea that morphed into a pay-to-play system that grievously undermined its own principles. Maybe we should just toss it all aside… galleries, democracy… but before we throw the baby out with the bathwater, consider the fact that the underlying idea is good. Like incredibly good. The pure research model the gallery system introduced is so good in fact that it has set the bar for everything else. If you are a furniture designer, or a public artist, or an illustrator, or a potter, or a tattoo artist, an Instagrammer, or just a regular old artist, the ideals and assumptions of contemporary art practice have fundamentally influenced you, have fundamentally convinced you that your individual voice is worth something. And it is worth something…

Because galleries. 

Sunday
Jul 01 2018

We planted flowers

We planted flowers. They grew up tall and strong and wove together so tightly that the leaves and blossoms cut off the light, and we lived in darkness.

We dug at the roots, we sawed at the stems, we fought through the tangled leaves, until we collapsed, exhausted, and we slept.

One day the wind pulled the dead petals from above us. It pulled the dead leaves next, and showed the sky through thin brown stems. We picked a path through them, towards the gray day, remembering the scent and the color of flowers, and the buzzing of bees beside us.

Saturday
Feb 17 2018

Industrialism

For thousands of years, art making in a western context was a trade - masters taught apprentices how to make objects. Now it’s a field for academics. Professors teach students how to conduct research. It used to be blue-collar, now it’s mostly white-collar.

Sure, artists still make stuff, lots and lots of stuff as a matter of fact, and that persistence suggests that making things still matters. But any good idea has its limits, which is how, right about the middle of the 19th century, simple decoration turned into a cartoonish parody of itself.

Think ‘Liberace’ and you will get a general idea of the look. Think of human shit being cleaned out of the hallways of the Palace of Versailles once a week, and you will get an idea of the consequences. And I know, I’m mixing the Second Empire with the Baroque here, but stylistically speaking nothing much had changed for centuries. It all seemed based on the idea that glamour in the face of shit and poverty and death was a legitimate way to escape it. And through the years and decades and centuries, more shit and more poverty and more death just meant that the decorative arts responded with more gold-leafed cherubs and more curlicues.

It couldn’t go on like that, and it didn’t. Romanticism and then Realism were really the first shots across the bow, but the whole thing changed forever with Impressionism. That began the cavalcade of ‘isms’ that defined modernism, and modernism is what really changed art from a trade devoted to decoration, into a far more intellectual pursuit, devoted to ideas.

By now, making anything at all is unnecessary, as artists have moved away from being mere decorators, towards something far more intellectual. Now they are considered to be some hybrid form of philosopher, sociologist, anthropologist, psychologist, and critic. Most of them have taken advantage of this new and tremendous opportunity to invest their art with socio/political and psychological layers that wouldn’t have been possible a mere century ago.

There are really two questions that come out of such a seismic shift: How did it happen? And does it even matter? The answer to the first is ‘Humanism, which begat modernism, which led to the industrial revolution’. And the answer to the second is ‘Yes’.

For people who have studied art history, it’s first important to point out that Modernism wasn’t a 19th century invention by artists - it was an enormous cultural shift that began a whole lot earlier, with implications that spread to every aspect of society. The thing that artists and designers now call Modernism came way after Modernism’s actual birth.

And that was in the 15th century, by the way. It all began very quietly and unexpectedly, with a bunch of ancient-manuscript-loving monks, who were incredibly inspired by Greek and Roman writers. A group of them, most famously Poggio Bracciolini, had developed a mania for finding writings by people like Cicero, Virgil, Lucretius, and Plato. This led to a renewed interest in all things pagan, which in turn sparked an interest in the ancient marble sculptures and ruins that were constantly being dug up at building sites, and in farmers’ fields. The mounting evidence made it clear that these ancient civilizations were way more advanced than was previously imagined, and shockingly, way more advanced than 15th century Europe. This challenged the monolithic hegemony of the Catholic Church, and ushered in the era we now call the Renaissance. It also ended what Petrarch famously called ‘The Dark Ages’, and set in motion a tremendous interest in science, history, and the arts. It led to Humanism, and the Scientific Method. And that path led eventually, inexorably, towards the Industrial Revolution.

By the time we made it there, two technological advances in particular ended up having a profound influence on art making. And the first, perhaps ironically, is the idea of ‘de-skilling’. It sounds ironic, because when one thinks of advanced technology, one might think that more skills would be necessary, rather than fewer. But for the vast majority of workers, the opposite was in fact the case. De-skilling was really the outcome of no longer needing to understand each and every step required to make something from beginning to end. Just knowing a part of the process was enough. As a result, factory work required far less skill than the artisan system it replaced, and this was very much by design - the training required to do the job was shorter, and the resulting product was cheaper.

The second, vastly different advance was photography, a revolutionary technology that profoundly challenged the goal of painterly verisimilitude, which at that point, had been the goal for centuries. With the simple press of a button, total mimetic reality was available quickly and cheaply. This forced painters, and even sculptors, to re-calibrate what it was that they did – their goal of faithfully representing reality had suddenly become profoundly undermined. Reassessing the goals of painting and sculpture became inevitable, and it ushered in an investigation into the nature of art itself.

Both of these advances had a tremendous impact on art making in the 1860’s and 70’s, and both played a role in formulating the modernist era in art. Paul Cezanne was probably the first artist to fully embrace this shift. Others, like Édouard Manet, had understood it, but Cezanne embodied it. His work made use of a very de-skilled approach, to make the point that the idea of pure mimetic representation was no longer necessary. Along the way, the idea of de-skilling came to mean something in an art historical context that diverged from its industrial revolution origins; it came to signify art breaking with the past, breaking with a decorative tradition, and claiming autonomy for itself. As the theorist John Roberts has written: “The notion of art as embedded in a prevailing set of technical and social relations – and that art reproduces, subverts or resists – was particularly acute as an issue in the second half of nineteenth century in Europe.” He goes on to add: “This is the moment – as artists begin to define their interests in open opposition to the academy and salon – when a gap opens up between art as a bourgeois profession – like law or medicine – and its nascent, undefined, unofficial social role as a critic of bourgeois culture.”

So that is the third profound change – art as a resistor, and critic. Today we have so fully digested the idea of the artist as critic, that we forget how new an idea that is. Artists now are seen as intellectuals first, and craftsmen second, if at all.

This really describes the basic lay of the land for art and artists in the last 150 years. A lot has certainly changed, but de-skilling, non-mimetic representation, and social critique still define the cornerstones of how artists think and operate today. That means the artistic identity we recognize as our own remains rooted in the 19th century. The question becomes whether or not that basic approach is still applicable in the 21st.

The answer to that would be a resounding yes… if things had stayed the same. But they didn’t. The industrial revolution gave way to the digital revolution, and modernism gave way to postmodernism. Art did what it always does, and mirrored those changes in real time. The results are still being felt, and in some cases, still being digested, but the environment that artists operate in today is starkly different than it was for David, or Ingres, to say nothing of Picasso, or Pollock.

The first thing a pre-modernist artist would notice if they stepped out of a time machine is the way that art in our current era has shifted away from being mostly public, in the form of commissions from the church, the government, and private patrons to decorate chapels and so on, to mostly private, where galleries sell art to private collectors. That shift turns out to have political implications that would surprise our time traveler: in its blue-collar, decorative incarnation, art was elitist in terms of its price (art has always been made mostly for the rich), but not in its conceptual underpinnings. Now, because artists are making art with a specific audience in mind, an audience versed in the academic, white-collar version of art, it’s elitist in both.

Art was a decorative trade, remember, which meant that its bread and butter were bible scenes and portraits, images and narratives that were broadly understood. Art today is really a product of an academic environment, rather than a master/apprentice environment. This has logically created a genre based on things that would mystify a Renaissance painter. Today, an infusion of philosophy, sociology, history and politics, to say nothing of pure self-expression, is just part of the process - not in the background, understand, but right up front. Individual styles and individual voices are no longer idiosyncratic outliers, but very much the norm. Making pretty pictures might sound absurd to us now, but we seem quite content to make things that self-consciously embody what we perceive to be meaning instead.

We should think about that for a minute: artists today set out with the wildly ambitious goal of making things of cultural significance, which is a pretty high bar to clear. But even more incredibly, we seem pretty convinced that we clear that bar all the time. Why else would we hire guards to keep people’s hands off of it in all of our new contemporary art museums? Why else would people pay millions of dollars for it at galleries, and at auctions? Why else would there be a whole army of conservators, tasked with figuring out ways to protect and preserve it? It must really be significant!

But significant for whom? The rise of the contemporary art museum coincides with a decline in museum attendance, and a decline in Internet searches about museums. The art world itself is shrinking – fewer sales, fewer galleries. So it’s reasonable to ask at this point which of these two systems is preferable: the self-consciously meaningful one, or the self-consciously good-looking one?

Artists are all very familiar with this choice – serious shit vs. artsy-craftsy shit. Modernism itself broke that down even further, into a more nuanced kind of turf war. (Conceptual artists vs. potters vs. glass blowers vs. performance artists vs. painters vs. photographers vs. social practice artists vs. pop surrealists vs. relational aesthetics vs. street artists vs. wood carvers vs. zzzzzzz).

And while artists slugged it out for much of the last century, modernism itself seemed not to notice. Remember that modernism wasn’t an art movement – it was the belief in a progression forward based on the application of reason, and logic, towards a better future. So in the face of squabbling artists, it found a solution: it replaced them all.

Well not the artists exactly…there are still a shit ton of them around, on a little island all their own called the Art world. Maybe more accurately, modernism replaced what they did, with a version that was cheaper. So when artists poked their heads up from bashing each other with Cubo-Futurism this, Abstract Expressionism that, they found that one ‘Ism’ had risen up and won the war: Industrialism.

Industrialism figured out that everybody wants a version of art – not just the super rich buying a rarefied version of it from art stars. And that conclusion would seem to support the claim that the stuff we can buy at the store, or the stuff we can see on Netflix truly is the replacement part for any kind of antiquated artisanal approach.

For a century and a half, artists were concerned with what art looked like, while during the same span of time, industrialism was concerned with making an effective product. Need some decorative objects? Factories can make that. Want objects and experiences that contain meaning? Yup, same. We would like to think that our most precious cultural artifacts are in galleries and museums, but actually, they are on T.V., the Internet, the radio, or in products like shoes, cars, and clothes. What this means is that the two roles that art has traditionally defined for itself, that of decorator, and that of meaning codifier, have been subsumed completely, and very efficiently, by de-skilled, industrial means.

Almost completely. Here it is important to understand just how far away from ideal that almost signifies. More stuff and more shared experiences have not in themselves replaced art or artists. There is a gap between what our modern world provides, and what our ancient human selves require, and most of us can sense the space between. What that means for artists today is that adhering to a particular approach, or genre, or media isn’t really that important – providing the missing piece of humanity that industrialism can’t has become key.

Artists today are competing with movies and television and Netflix, and phone apps, and sneaker designs, and Instagram, and a whole lot else that industrialism is throwing at them. Industrialism has proven to be pretty efficient at everything it does. So how can such a marginalized activity compete with that, much less achieve any kind of cultural relevance?

A better question might be whether or not artists are marginalized enough. That’s because the ability to see what’s missing means standing outside just far enough to have a clear view of the big picture. That’s why culture comes so frequently from outside the center, and spreads inward. Or from the bottom, and ascends to the top. To that end, it’s worth asking whether the toolbox that our academically trained, white-collar artists still rely on today, born out of a response to the Industrial Revolution – de-skilling, non-mimetic representation, and social critique – still operates as advertised.

De-skilling in particular looks pretty suspect here. That’s because de-skilling, born more than a century ago as a means of pulling the rug out from under art as mere decoration for the rich, has itself become decoration for the rich. By now de-skilled art is the expectation, rather than the party crasher. That’s why de-skilled art defines, without question, academic and commercial orthodoxy – take a quick look at the artists at any Documenta, say, or Biennial, and it’s soon clear that the de-skilling crowd, using an approach only available to them post 1870, have come to dominate the art world.

But maybe it’s still a relevant approach - after all, first we were ‘De-skilled’ by machines, next we were ‘Disrupted’ by computers, and then ‘Downsized’ by automation – doesn’t all of that point towards de-skilled art as being the most relevant embodiment of our modern world?

It certainly might be the best example of it. But using de-skilling to point out the way things are de-skilled seems a bit like smoking to talk about cancer, or burning coal to talk about global warming, or littering to talk about littering – it represents the very thing it protests. And what it protests (or should I say, used to protest) is the expectation that art was here to please, rather than to critique. Now of course, artists try to please via critique, and a lot of the time that’s done via de-skilling.

Meanwhile, almost two centuries into the industrial revolution, we have been infantilized by a system that makes us into consumers who buy things, rather than people who know how to do things. Which is great if you’re rich (and can’t be bothered to ponder the consequences) and lousy if you aren’t. Either way, de-skilling as an aesthetic choice is one that can really only be made by the rich – everyone else has to make things actually work. Whether it’s raising food, or making shelter, or making clothes, all of it requires people with skills using their hands to perform the job. In a world where most people are poor, and most lives are therefore dictated by primary skills, making things becomes a political act, one that disrupts the consumer/infantilization model so preferred and perfected in our digital age.

And speaking of digital, that brings us to the next tool in the box, photography, which is now almost entirely a digital medium. Remember that photography arrived on the scene as a kind of aesthetic truth teller, accurately representing reality, or so it thought. But it didn’t take the digital era long to make that idea a thing of the past. Any attempt at a one-to-one representation of reality is now a quaint artifact from a pre-Photoshop era, which is why one of the common themes of contemporary photography is to question the veracity of the photographic image itself. Everything requires interpretation, and mimetic representation reveals that bald fact as well as anything, because any version of Truth that we can all agree on is impossible to achieve.

That’s why there was a turn towards abstraction. The Abstract Expressionists really thought they were on to something when they decided that their paintings represented nothing but the paint on the canvas. They set out to defeat a mimetic impulse by simply being the thing, rather than representing the thing. By now though, a hundred years or so into the abstract art experiment, the act of seeing an abstract painting makes us think about other abstract paintings, rather than about interpreting experience. Abstract painting has become less of a shortcut to reality, and more of a particular historical reference.

Today, choosing to make imagery that utilizes abstraction, or self-consciously rejects technique, doesn’t make something art as much as it makes it modernist-inspired art, which in turn inspires a specific, historically anchored line of interpretation. Meanwhile, any attempt at mimetic representation just about entirely sidesteps all of the basic tenets of Modernism in one go – which is important if the art that’s being made isn’t modernist.

But what about the social-critique function of art? That must certainly have value. And it does – literally – it’s collectable! And it’s worth pondering the deep irony of that. It’s safe to say that at this point, every attempt at aesthetic rule breaking or cultural critique made by the modernists ended up on collectors’ walls. It turned out that the art world could put a price tag on rebellion itself, and the collectors who bought those things could feel a little bit like rebels themselves. The idea that art was (and is) a purely intellectual investigation existing apart from the apparatus set up to disseminate it, refusing to play the role of the decorative object, just never happened. Like de-skilling, the approach of the intellectual over the decorative transitioned from radical fringe to comfortable orthodoxy. Now its various descendants populate art fairs.

Last is a note on the romantic, which the trade of art, and the discipline of craft, have long been described as being. And they certainly can be. But there is an important distinction to be made here between ‘amateur’ and ‘professional’ – amateurs are romantics. Professionals, confronted with bills and deadlines, very rarely are. This distinction separates those who actually make their living at something from those who don’t. That means that it’s quite possible to be a very romantic contemporary artist, making abstract, or academically inspired art. In fact, it’s more often the case than not, because most of the time, those people aren’t making a living at it. Meanwhile, the sheer pragmatism of having to make a living at something really changes the relationship to the work. It makes it a whole lot less precious, and a whole lot more accommodating to an audience.

Art used to be a trade, and then it became an intellectual pursuit. As each of these two poles reached a kind of purity of purpose and definition, each became a parody of its original intent. It got cartoonishly overwrought in the first case, and melodramatically self-important in the second. Into that breach came an industrial and economic might that made the argument essentially moot, because it provided most of what the artists were trying to do, far more cheaply and efficiently than they could. The result was to alter the internal debate about what constitutes art itself, from decoration, to ideas, to mechanized fulfillment. Artists are left to show us what that debate left out.

As Carlo Scarpa was fond of saying: Verum Ipsum Factum, which can be translated as ‘We only know what we make’, or ‘Truth through making’. As the dust settles on the 20th century, and we struggle to make sense of what comes next, we would do well to start with that.

Monday
Feb272017

Moon Shot

I was in my third year of art school making perfectly awful paintings, like I had the year before and the year before that, when I had a simple idea: to put an object in one of them. My thinking went something like this: if I put something good on top of something not so good, that will make the whole thing slightly more good. Or slightly less bad, depending on your perspective.

This isn’t as crazy as it sounds. I liked making things, and I thought that adding something I liked would be a good start. At this point an observant reader might wonder why someone interested in object making would try and become a painter in the first place – a square peg in a round hole and all that. But logic was never part of the equation. My skeletal understanding of art had somehow made painting its true representative, which meant that if painting wasn’t for me then very probably art wasn’t for me. I needed a spark, a miracle, a moon shot, and little did I know that my moon shot was about to come in the form of a smallish carved head.

Before I get to that, it’s probably clear that I was misinformed about a couple of things when I went to art school. The first one obviously being that painting was the true story of art. But the second one was even more deeply embedded in me, and it’s what I now think of as the Myth Of The Artist. You probably know the one. It’s the perception that artists are the irascible (but lovable!) creative firebrands, who inevitably give society fits, even as they expand our perceptions of the world. I didn’t know much about art, but I thought I knew about artists, and artists were a bunch of rule-breaking libertines. Fuck yea! I wanted to be one too. What I found out when I got to school, however, was that the above description really described a modernist artist, from when art was about art, and pushing boundaries was the thing. By the time that I got there, all of the boundaries had gotten pushed already. That meant that the firebrands were out of a job.

If the task of modernist artists was to seek out the cutting edge of art, then what do you do when there are no edges anymore? What you do is realize that modernism is over, that’s what. That meant that the postmodernists took up the reins with all of the enthusiasm of a clean up crew after a really good party. The modernists had such a clear mission, such a purpose. The postmodernists didn’t. So what was the way forward for us?

It was around this time that I remember going to a Dave Hickey lecture, and during the Q & A immediately following, a young art student asked him what he thought the next thing in art was going to be. He looked off into space for a few seconds. Then he said slowly, “Well, all the easy stuff has been done…”

To which he could have added “And all the hard stuff has been too”. But his point was clearly understood. Art making had gone from being technically rigorous, and decorative, to intellectually rigorous, and very self-consciously non-decorative. Which is to say, it was mostly devoid of special technical skills. Art’s former life as eye candy was tossed, in favor of a far more intellectual approach. People began to refer to De-skilling not only as a philosophical starting point, but also as an aesthetic goal. Other terms like ‘Non-objective’, and ‘Conceptual’ were used and thought of in similar ways. The old school craft associations were strenuously pushed away, making words like ‘Craft’ and ‘Mastery’ sound laughably retrograde. The pendulum had swung in a complete arc, from one pole to another.

There is a funny thing about revolutions though. No matter how they start, they all end the same: the radical fringe becomes the new normal. After a while, the new normal starts to seem like the old normal – the old Hegelian model of thesis-antithesis-synthesis. And so it was with Modernism. The academic hierarchy and monetary excess that made the Beaux-Arts possible quickly adjusted to their new modernist equivalents. Now of course, our new academy is bigger than ever, the gatekeepers more credentialed and numerous, and the prices paid for the most expensive art enough to make any pope or king blush.

Aesthetically we fell in line too. The new orthodoxy that modernism instilled was to seek out originality at all costs, a paradox that seemed to bother no one. (Didn’t catch it? Orthodoxy and originality in the same sentence is conceptual whiplash). In fact, Modernism was so broadly successful, and for so long, that soon enough, ‘new and improved’ was a way of life. Those very words in fact began to appear on pretty much every product out there, with a metronomic regularity, and CEOs started dropping terms like ‘disruption’ and ‘creative destruction’ all the time, further reinforcing the idea that constant and propulsive change was the highest good.

There is an irony in the idea of forcing change to happen though, which is this: it happens whether we want it to or not. Change is the one thing that is unstoppable, and requires no special prodding from us. Imagine, for example, exerting a great amount of effort to try and get the tide to turn more quickly – even if we were successful at doing it, which would definitely be an achievement, the result would be redundant at best. The one thing that modernism brought us with relentless efficiency was a constantly changing tide. That meant more of everything, faster – a lot of good stuff to be sure, but also plenty of useless and disposable crap right along with it. Suddenly, new and improved didn’t feel like it was either, and the thought of getting disrupted, much less creatively destroyed, just didn’t sound so good.

Consider this: when I was a kid, we couldn’t wait for the future to come. It was going to be the logical fulfillment of human possibility, and it was all so achingly, agonizingly close. Now the future seems to fill us with dread, and a surprisingly deep sense of nostalgia has replaced that once pervasive sense of possibility. Modernism may have institutionalized revolution itself, and in so doing turned freedom and rebellion into a brand, but the shift to postmodernism made it clear how tainted, and filled with dystopian overtones that approach had left us in the end.

My brain fogged over when I thought of this stuff. All I was trying to do was make something halfway decent in art school, and school had responded by telling me that shit was complicated. A certain amount of paralysis by analysis set in, and like a lot of younger artists who see the big picture for the first time, I became creatively constipated. Postmodernism, that vague philosophical construct, had all of a sudden become personal.

So how to break the log-jam? Dumb luck, that’s how.

It’s a curious fact that inspiration only seems like inspiration in retrospect. At the time it occurs, it can feel a whole lot more like just grasping at the very last straw out there. For me that last straw was buried under some clamps in the tool room in sculpture class. It was some carving tools that I had unearthed mostly by accident, and seeing them there made me realize that I wanted to try my hand at making something with them. I was lucky that I had a teacher named Ed Wicklander who carved from time to time, because seeing his carvings had uncorked a secret desire in myself that I hadn’t even really known was there.

My reluctance to give in to my impulse to carve probably had something to do with the fact that what initially inspired me about it was how it operated independently of sculpture, or maybe even art. It just seemed more like architecture. In Europe I had seen the way that entire cities were carved from stone, not just into simple blocks stacked on top of each other, but into crazy, interlocking organic shapes, figures, gargoyles, patterns, and so on. This was utterly foreign to me as a kid growing up on the West Coast of the US, surrounded by pressure treated wood decks, vinyl siding, and generally crap construction that wasn’t meant to last. I found it impossible not to become enraptured looking at structures that refuted that familiar clap-trap disposable premise in every way. Any doorway, or window, or fence was an opportunity for a craftsperson to go bananas, to one-up all the other doors and windows and fences out there.

What was even more amazing, aside from the technical wizardry, was the sheer practicality of it, the basic problem solving, brilliantly realized. Great effort was expended to make sure that the windows didn’t leak, that the buildings could withstand fire, and that the stairs didn’t creak, all with incredible flair. And all of it was done by what came to be my favorite artist: The Anonymous Craftsman. Yes, I know some of their names now, but whether or not they were remembered or forgotten, the training and attitude of all of these carvers seemed the same: to make things as good as possible. The way all of that effort and love added up to this incredible tapestry of stuff meant that simply walking down a street became an adventure. It was hard to come away without feeling energized, and filled with a sense of possibility. Every surface of every building seemed to be asking the same question: What if we all just tried to make the coolest shit imaginable?

As I looked at the motley collection of carving tools in the shop, I thought that I too could add to that tapestry. I could almost see, and smell, the fresh wood shavings falling like petals all around me, as I brought forth sensual forms from the wood. Gripped by this image, a Zen-like spiritual reverie seemed to descend, the very same reverie that was certainly the default work mode of every wood carver out there. Incredible! I also wasn’t mad at the prospect of not having to share the tools with anyone, because for some reason nobody else in my sculpture class seemed interested at all in carving. Again: incredible. I smirked inwardly: I had just stumbled on a surefire way to crack open the art game in a way that nobody else out there saw coming. My journey to artistic relevance had begun.

The first clue that my plan had some flaws was when I began to actually use those dull and forlorn carving tools. There was so much pushing and pounding and cajoling and wincing and wheezing involved, that I truly wondered if they were bought from a practical joke store, like where you get fake poop and fake vomit. Maybe these were fake chisels, bought by the shop techs to punk little sculpture kids like me. But I was forced to conclude that they had actual metal blades, and actual hardwood handles, which I had to admit, made them real. As I looked around at all of my fellow students, (none of whom were carving, and had absolutely no plans on doing so), I was also forced to conclude something that should have been as obvious as a flashing neon sign: carving was hard.

It was about then that all of my idealized visions of carving quickly crumbled. I experienced for the first time what I now call the ‘Two P’s’: pain and panic. New carvers bump up against these two devilish hurdles without fail. The pain part is obvious; it starts with the hands, and works its way up to the wrists, the elbows, the biceps, the shoulders, and finally down the back and up the neck. Getting cut is part of the pain equation as well, but except for a few spectacular whacks, that always takes a back seat to constant muscle and joint pain. Meanwhile, the second P (panic) is the more surprising of the two, and therefore the more mentally crippling. It is the feeling that all of the time and effort being spent is all for nothing, that the piece being worked on is not turning out, or that it’s a bad idea in the first place. This slow seepage of doubt, and dread can be an abyss that claims your soul. No other medium I’ve worked with has an equivalent.

Experiencing the full force of the Two P’s didn’t happen right away  - it never does. It’s always on the second or third day, after a real commitment has been made, before it sneaks up and wallops you. By that time, Pee Pee mode is in full flower. Years later, I would actually design a Panic Tamper carved from the hardest wood, as a talismanic object designed to offset the Pee Pee. It basically rams down one’s gullet, forcing the panic into a hard nugget that lodges in the gut. This was one of a number of talismanic objects I designed, that every wood worker out there wishes were real: the board stretcher, the joinery healer, the self-sharpening blade. If that panic tamper was real, I would have worn it out by the end of my first year of carving.

That’s right, I did keep carving. You do know how the story ends – pretty much all the work I’ve done since then is carving. In fact, after I finish writing this on my computer, I am going to go carve. Most of my days, and a lot of my thinking, involve carving to a very large degree.

So the question is, why did I keep going? What made that initial experience, filled with frustration and failure, ultimately feel like success? And I have to say that it had to do with re-adjusting my relationship with success itself. Before I started carving, I had always assumed that I would succeed at whatever I was doing, and that it would be relatively easy. Carving eliminated the prospect of making those kinds of assumptions. I was only going to get out of it what I put into it, and in order to do that, I needed to adjust my interior clock from instant gratification to something that just might take a while.

This is a far cry from eliminating a sense of optimism, understand; if anything, I saw the potential to make things that actually were good, in a way I never had before. Carving seemed to say “check your entitled shit at the door – you might spend a long time on something that totally sucks”. The pace inspired a slow but steady conversation with it - I realized that I was making the occasional suggestion, rather than forcing my will upon it. This upended how I was used to making things, where I started with an idea, and then expended just enough effort to illustrate that idea, which in effect became stenography, or a form of dictation. Carving, in contrast to that, was a relationship, where I could see possibilities, rather than a singular foregone conclusion that I could impose. The resulting piece was at best a surprise. It wasn’t exactly what I had imagined; it was something else entirely, like it was made by someone else entirely. I wanted to know about this thing, to study it - my original idea had splintered into various avenues that led to other places, which I now saw in a beckoning, fragmentary form. I wanted to see more.

It’s incredible that so much came out of a very rudimentary little portrait of a bald man (I skipped carving the hair, because that seemed too hard), especially because the final result was so bloody awful. Despite that, I still felt an odd sense of accomplishment, tempered a bit by the crap outcome naturally, but fueled by an inescapable sense of possibility. The pragmatist in me split the difference between this mix of emotions by not throwing my little carving away, but instead lighting it on fire until it was a barely recognizable lump of charcoal. I then dangled it in front of my painting, and something magical happened: the painting got better. It bumped up to maybe even being pretty good. And so it was that I saw the future, which was carving, and the past, which was painting. The little lump of charcoal seemed to egg me on; I knew I could do better, that I would do better. I felt pulled forward, into this new place, a place I’m still exploring even now.

Art is hard, under the best of circumstances. It’s probably out of reach for most of us entirely. Carving, which is my chosen medium now, seems to break that bald fact to me everyday, which I appreciate. But getting better at it makes me think less about making Art with a capital ‘A’, and more about just being better at things – carving, sure, but also baking bread, writing, friendship, really almost anything. The moon shot might not have gotten me all the way to the moon, but it sure widened the view.

Monday
Nov142016

How Long It Took (Divide By Why)

I had a visitor in my shop not that long ago, talking about this and that. He ended up staying for a bit to watch me carve. It didn’t take very long before I could tell he was agitated about something. Finally, in a low conspiratorial tone he said, ‘you know you can get a robot to do that, right?’

Maybe… I’m not so sure. The digitally carved work I’ve seen actually looks pretty limited – no undercuts, no incised lines or flute cuts, no thin parts, and certainly no way to improvise on the fly. Much of it has a lumpy, yet regimented quality (which is as odd as it sounds) when the pieces are un-sanded. And when they are sanded they look like bars of soap that have been left in the shower for a month. I’m not saying that the technology out there won’t catch up to me sooner or later, but for now, it’s not even that close.

What struck me about this visitor mentioning the robot thing wasn’t his lack of sophistication about the technology though - hell, maybe he was right. It was more that there was a clear desire to help me overcome the sheer inefficiency, or even drudgery, of what I was doing. This realization got me thinking. There have been lots of people who have asked me how long a particular piece took to make, and a lot of the time I found the question to be far from interesting. But now I realized that what they were really asking had less to do with how, and more to do with why. My studio visitor, conjuring his non-existent technological short cut, was in essence asking much the same thing.

Why, as opposed to how long, is a perfectly fair question of course. There is a deeply held aversion towards needless labor that’s probably as old as humanity itself, and if anything, that aversion has only increased in our automated present. With that in mind, it seems clear that a very large part of what I’m doing has to do with labor itself, and that inefficiency and drudgery are both intrinsic to the conceptual starting point of the work. This is in response to the contemporary world, which has put so much value on exactly the opposite approach, by prizing efficiency over pretty much everything else. That in turn has had the unfortunate consequence of making things like beauty and durability sound like unreasonable extras. My work posits the idea that they are actually essential. And as odd as it sounds to most contemporary ears, inefficiency and drudgery are an excellent way to accomplish that. Inefficiency, for example, ambushes rote repetition in favor of surprises, and improvisation. And drudgery makes clear that spectacular results don’t happen by stringing together spectacular moments – the lows exist there right alongside the highs.

My interior reverie didn’t end there though. The other thought that occurred to me was to consider a world in which the carving I had just made could in fact be carved by a robot. Would that be a win for humanity? And what exactly is our relationship to technology in the first place? At what point do our lives become so distant from the world via technology, that it actually makes us worse off, rather than better?

It’s not exactly a new thing to think about. People have been asking that question since the dawn of the industrial revolution. The Luddites became famous as the machine breakers in the early 1800’s – destroying the mechanized looms that they felt were taking away their livelihoods. William Morris followed that in the 1860’s, when he started the arts and crafts movement, based on his very critical assessment of the industrial revolution. And so began, in fits and starts, a long line of proto hippies, which progressed until the actual hippies showed up, in the 1960’s. Now we have all the locavores, and slow food enthusiasts, the slow fashion movement, the small house movement, and the Etsy crafters, among a whole lot else.

There have been doubters about technological progress from the beginning, despite the fact that they tended to be the minority, and despite the fact that they tended to be viewed as a little nuts. Which is the way a lot of movements start, if you think about it. Suffragettes, or civil rights advocates, or feminists, or environmentalists, all seemed out of step, until the day came when they didn’t. Whether we are to that point with technology yet, as always, is an open question.

What is pretty clear though, is that spinning a dystopian bummer about the evils of our current technological moment, with all of our gadgetry, and fractured attention spans, would be an easy thing to do. But I’m not going to do it. Because at this point, new technology has to play a role in un-fucking up a lot of what our old (and current) technological fuck-ups have wrought. Food production, clean energy, clean water, environmental issues, medical breakthroughs, transportation, among a whole lot else, need to be solved in order for billions of us to avoid the catastrophe we certainly face. We are too far down that path to change it now.

What I’m doing by laboriously bashing on blocks of wood certainly talks about technology by way of its absence, but in my opinion that strategy isn’t a clear condemnation of it, much less a plea for us to return to caves. It’s an acknowledgement that a physical relationship with the world is still absolutely essential, and so too is the human agency required to negotiate it. That frequently overlooked fact means that we will have to find a sustainable relationship with more than just what’s left of the natural world; we will also have to find a sustainable relationship with our technology. So far, we aren’t close to doing either.

Understand that my own work depends heavily on technology – I’m not carving large blocks of wood with my fingernails. True, my process requires no screens to peer into, and no set of digital instructions to program, but the technology involved in making what I make is still high nonetheless. Metallurgy, tool design, and even lighting and dust collection are all things that have been relentlessly improved on over many centuries, and I couldn’t do what I do without them. At the same time, I also couldn’t do what I do without understanding that the tools I use, powered or not, aren’t magic – they won’t make things appear without my hands knowing what to do. The shortcut that tools represent is misinterpreted if we conflate ‘fast’ with ‘good’ and ‘fastest’ with ‘best’. We have plenty of evidence all around us that shows the unfortunate outcome of thinking that way.

Not that I’m mad at efficiency, as a goal. I’m mad at efficiency replacing awesome as a goal. There is no question that technology has been really successful at shortening the distance between wanting something and getting it, but maybe a bit counterintuitively, that often puts awesome further out of reach. And it does so in a couple of unfortunate ways: the first is to convince us, quite logically, that taking on long-term, challenging tasks is for suckers. And the second is to cut us off from how the basics of our lives come to exist. By now, we are hazy on how our food arrives in the grocery store, how our shelters are made, how our clothes are stitched, our phones assembled, because all of it just seems to appear like magic. What this means is that not only have we limited our ability to intersect with systems we don’t understand, we have also limited our desire to try. For people who face a lot of problems, that’s not a good place to start.

Making things posits another approach to this particular predicament, by trading away drift and paralysis for agency and action. It connects our own abilities, physical as well as mental, to our own ambition. It makes clear that acquiring a high level of agency means acquiring a high level of ability. And gaining agency over something is to unravel the power dynamic of the contemporary world, which is set up on the premise that if we willingly jettison a portion of our initiative, knowledge and power, then we will get our needs met by a group of invisible others (inevitably browner, poorer, and far away) in return.

If the work starts with the idea that actions are not only freighted with political consequences, but also form a very reliable portrait of one’s values, then it’s clear that actions, far more than words, reveal us. So what actions do we choose? Carving might seem like an obscure outlier as far as actions go, but it’s actually deeply embedded in the human experience. In fact, it pre-dates the appearance of Homo sapiens, by a lot. Our proto-human ancestors, the Australopithecines, started carving at least two million years ago, and probably closer to three. For that reason, tool use is now thought to have inspired our brains to expand and adapt to our hands, rather than the other way around. Pretty much every culture, in every era, figured out how to carve. That means most of us relate to carving as an activity without relating it to a specific context, unlike performance art say, or painting. Chances are good that you saw something that was carved today – maybe in a church, a park, a public square, a cemetery, or maybe on a fireplace mantel, a piece of furniture, on a building, or maybe even in a museum or gallery. All of this adds up to carving being a very good vehicle to demonstrate a tangible interaction with the world, via a very familiar method, in a very familiar material.

And a sidebar here about making: what I’m talking about is the opposite of romantic amateurism, understand – where a gauzy nostalgia and a studied lack of ability can often end up as art – maybe can only end up as art – because what else could such a thing be? No other sphere but art would accept making something poorly as a premise. And that ends up being a problem, because if there is a formula for making something look like art, then plenty of us will see that as enough. If there is a formula for eliminating a laborious search for art, then we’ll stop looking. Our default route now is to find the shortcut.

What I’m talking about instead is pretty much the opposite of that: working towards gaining knowledge, skill, and competence – clear thinking meeting clear ability, and using both to push each other forward. That requires discipline, following through on what you start, and doing something with love. Oh – and making it plain that love is work. Sound like a bumper sticker? Yeah, well that’s fine. The results of this particular approach may or may not be art, or contemporary art, or craft, or any hybrid thereof. I’ll leave it to others to name it. All I can do is persist in making sure that the work reflects me, that it reflects my values, my interests, and my concerns. If it does that, then I’ll name it successful.

That word concern really taps into the last part about why I’m spending so much time laboriously making stuff. I’m concerned about responding to the world in some tangible way. The opposite of that of course is no response, no engagement. Which, for the nitpickers out there, might form a response of its own. But I’m not interested in that kind of apathy. To me, it’s as if I can hear the world in a low humming voice ask me if I have truly paid attention to it, truly seen it, heard it and felt it. (Or is that voice Bruce Nauman?) The work I do is trying to say yes to that, as respectfully as I can.

Thinking that way has made me realize that the reason I’m doing this is not because of all the contemporary art I saw as a kid – suffice to say that there wasn’t any in Anchorage, Alaska, where I grew up. No, the reason I’m an artist is because I saw and experienced plenty of other things that most of the time didn’t call themselves art, but inspired me nonetheless. Things like native carvings, frontier architecture, airplanes, hippy crafts, the Northern Lights, and opera. (Yes, I said opera. I worked in the set shop for the small opera company in Anchorage.) I was a voracious reader as well, and through the words of others, I saw views of the world that were starkly different from my own. This spectrum of stuff had a cumulative effect on me that can only be described as wonder.

The curious thing about wonder, as opposed to awe, or admiration, is that it not only draws one in, it also encourages participation at the same time. It’s the music that inspires the dance. Which is why the grab bag of stuff I was looking at seemed to call out for a response from me. It just didn’t seem crazy to think that I too had something to add to all I had seen and experienced, and that my participation might even be necessary. Wonder has that power.

Back in my studio, my visitor stood to leave, and my train of thought came to a screeching halt. I hadn’t really answered him about the robot, probably because it would have caused more confusion than clarity. But if he were here right now, like you are, I would tell him that I’m content to participate in the way that I already am – with inefficiency and love, and drudgery and respect. I’ll skip the robot… but thanks for the tip.