Wednesday, Jun 12, 2013

Flip

Like probably every artist out there, my intention is simply to make the stuff that I would most like to see. For me that means making objects most of the time, because making objects strikes me as badass. Furthermore, it means carving a lot of those objects, because carving strikes me as being extra badass.

And that really about does it for any kind of artist statement. This is where I scroll down to Save File As… and get up to do other things. Except I can’t quite. My own impulse to make work seems clear enough, but talking about it always makes me wonder: Where are the other carvers? Try counting up carvers in a contemporary context: after about five it gets tough. Heroic googling will get you to around ten. This, despite the fact that there are an incredible number of artists out there, making an incredible amount of stuff. It was never my intention to get out of the way of pretty much everything else happening in the art world, but being a carver has exactly that practical result. So it was with a mixture of curiosity and trepidation that I began asking myself more specifically why I have made the choices that I have made. And why (maybe) other people haven’t. Which leads me to what follows below: the addendum, the footnote, the epilogue, or something or other.

Before I go further I should probably define my terms here. Carving itself is very much a living thing in almost every cultural tradition except for the one that I primarily draw from. That would be the one that started in Greece thousands of years ago, and came to define Western civilization. To be sure, no form of carving is especially well represented in the contemporary art world (sorry Native American carvers, Asian carvers, African carvers, chainsaw carvers, Aboriginal carvers, Eskimo carvers, etc., you are all awesome), but the style that I’m talking about, the one that included practitioners like Praxiteles, Michelangelo, Riemenschneider, and others in an unbroken chain that lasted for centuries, is especially and conspicuously dead. It died, swiftly and totally, at the end of the Beaux-Arts movement. How swift, and how total? Look at the tops of the four columns at the entrance to the Metropolitan Museum of Art in New York. Raw blocks of stone lie in piles atop the carved columns. The four carvings that Richard Morris Hunt had intended Karl Bitter to do sit unfinished; Bitter was run over by a car and killed in 1915. No one was found to replace him, and to this day the stone remains untouched.

The Beaux-Arts movement limped on from there, alive but slowly losing steam. The Lincoln Memorial was built in 1922, Riverside Church was dedicated in 1930, and Mount Rushmore was completed in 1941. But in 1945 the Piccirilli brothers, carvers not only of the Lincoln Memorial statue (from 28 pieces of white Georgia marble, weighing 159 tons), but also of the lions in front of the New York Public Library, the frieze on the New York Stock Exchange, and the figures on the arch in Washington Square, quietly closed their doors for good. Really, at that point, it was all over.

That didn’t exactly kill off sculpture, much less art in general. In fact the rise of the modernist era cranked up virtually every other artistic activity imaginable. The art world today is inclusive of everything, everywhere, all at once: a profusion of art coming at us from every corner of the globe. How curious, then, that there are a few traits linking so many of these disparate activities together; that in an incredible defiance of the odds and the numbers, most artists have signed on to just a few starting points for their work that are nearly universal. How curious also that carving in one way or another undermines or even defies them.

How exactly do all the artworks of today relate to each other? The first link is that they are overwhelmingly conceptually based; that is, driven by ideas over any particular medium or method. This point logically leads to the next: if a medium or method is not really primary to an art object, then most artists will understandably reject the idea of becoming technically proficient in any one of them. In other words, an art object (or experience) should be just facile enough to get the idea across, and no more. It should never, ever be confused with craft.

The description above has become the default way to think of contemporary art, essentially moving past fetishizing mute handsome objects. Meanwhile, the modernist creed to “Make It New” envelops us totally, so totally that rarely do we notice that it is, well, old. For a century at least, we have been acclimated to look for, and indeed are most comfortable with, art that raises doubts about whether or not it even is art. Along the way a particular philosophical framework, in which the idea comes first and the hand is removed, has become the foundation of a new tradition. Conversely, artists making things at a high level of tactile mastery (minus the army of fabricators), and following the work in a way that is unconcerned with self-expression or even particular ideas, represent an older one. But no matter how you slice it, in the postmodern world it has become impossible for artists today to work without a tradition.

Most of what I see in galleries and museums exhibits a nod towards both an idea and a studied lack of craft, as well as a third trait I’ll get to in a minute. So much so that the overwhelming majority of things I see now have come to define what I would call Traditional Contemporary Art. In this definition, people like Elliott Hundley, or Rachel Harrison, or Urs Fischer, or Harrell Fletcher, or Jeff Koons really represent the norm, rather than the fringe. A quick peek at their long resumes, blue-chip gallery affiliations, and high auction prices will attest to that. Furthermore, they and many artists like them are doing their particular gig so well that any desire I might have to add to that genre is pointless. Kudos to them! I love a lot of that work. But I’m interested in making the things I’m not seeing, not adding to the things I am. And I’m seeing an awful lot of what they are doing.

Despite the above description, it’s highly ironic that the popular definition of artistic orthodoxy still stubbornly sticks to something like carving. But the reality is that carved things haven’t come close to the mainstream of contemporary art for at least a century, especially if they are technically proficient. The definition of orthodoxy has flipped.

In the meantime, tradition has become inescapable. Finding a creative act that is without precedent is not only impossible, but searching for one in the first place seems retrograde: a modernist task in a postmodern world. Congratulations and all that in the unlikely event that you find it, but it’s not as if it’s the main task of art to look for it anymore. Art has no main task. Precedent for everything means ‘Tradition’ is something that is impossible to operate apart from. And that’s good. But ‘Tradition’ is used as a pejorative in the contemporary art world so often that it’s easy to lose sight of the fact that it simply refers to something that is readable in a communal way. In order for traditions to have any meaning at all, they must rely on the context of community. In fact, traditions are one of the ways communities show their contextual boundaries; they are a display of the very ties that bind them together.

The opposite of tradition is ‘Freedom’, which is the unraveling of those communal bonds in order to gain autonomy. Taken to its logical conclusion, there is a relationship between freedom and alienation, because fully unraveling those bonds means being unreadable by anyone at all. Being totally bound is no more preferable than being totally free, of course, so it would seem logical to chart a course that moves in a nuanced way between both poles. Which is why art at its best seems to be both strange and strangely familiar, allowing for discovery and connection simultaneously. The ability of art to operate in this way might at first seem like a paradox, but only if one discounts the fact that new things come out of old things rearranged and reconfigured. Traditions are therefore alive: they reflect their moment, while acknowledging the steps taken to get there.

To acknowledge tradition is to acknowledge that for many centuries, people made things with a high level of skill. Despite that fact, craft and skill in any pre-modernist form didn’t come up much when I was in school, other than as a historical artifact. And while I really didn’t think about it much at the time, its very absence was instructive. It was really only after I left school that an observation about my experience within it began to creep up on me, which is simply this: crafting an object takes the attention away from the artist, and puts it on the object. Conceptualizing an object (or a performance, video, intervention, social practice, soundscape, etc.), on the other hand, takes the attention away from the object, and puts it on the artist. The first was deemed bad. The second was deemed good.

How can conceptualizing an object put the attention on the artist rather than the idea? Because a conceptual artist at some point needs to be present to explicate what the idea is in the first place. Thus, even though the concept of the art supposedly shines through more clearly minus the hand of the artist, the artist’s face and voice and personality appear instead, because some form of salesmanship is required for that particular thing to be understood. Which leads to the third major trait I see in work today: the way that the ‘concept’ can be elastic enough to encompass merely the flavor of a personality. This is Conceptual 2.0, where it is enough that the concept is a compelling story or entertaining explanation. This is why ours is the era of the Art Opening and the Artist Talk; how many artist talks do you suppose Picasso did, or Manet, or even Duchamp? This is how we’ve come to focus our attention on artists talking about ideas rather than simply on objects that embody them. Objects have receded to the background as artists have come to the fore, and that flip has come to define most art in a contemporary context.

This reality glosses over a very important question, though: are artists and ideas really the most important elements of art? If so, then art has moved into an area that seems to be a hybrid of philosophy and show business. Which actually sounds kind of interesting. But to accept this state of affairs means assuming that the intellect is the most valuable part of us, the most reliable part, the part that we should elevate above everything else. The best art (in this calculation) will always be the smartest art, which leaves out art made with joy, or sadness, or anger, or boredom. It leaves out intuitive art, beautiful art, ugly art, process art, tactile art, interactive art, crafted art, outsider art, and so on. In other words, all the stuff that starts without a heaping dose of the intellect.

Starting without the intellect doesn’t mean the work never picks some of it up along the way, of course. In fact, it invariably takes on a more unpredictable side of it. So rather than illustrating ideas already conceived in the mind of the artist, starting without them allows room for discovery, for challenging assumptions, for serendipity and surprise. It allows for following a path that isn’t really clear. It allows, in other words, for exploration.

Starting without fixed ideas means starting without the mind isolated from the rest of us; it accepts the body, the hands, and the senses as equal partners. Including them brings back the possibility of making things. In fact, it almost demands it. And as the artist begins the process of making something, a human activity with at least a two-million-year history, a further observation is easy to make: if the body and the hands act as the guide, then they had better be capable. The sheer discipline of making something becomes an avenue of discovery, a way of creating something the mind can’t exactly plan, or explain. That comes in after: the explanations, the questions, the criticism. And when it does, the physical and mental work together again to refine whatever was made, iteration after iteration, creating something greater than either one alone would be capable of.

The argument against working like that of course is that the hands can take over. And it is true that becoming highly skilled at something and leaving it at that does have some real problems attached, problems that require some understanding on the part of the artist to avoid. Simply put, it’s that craft in and of itself isn’t really much of anything. Learning it, like a writer learning the rules of grammar, is a good thing. But learning it and then thinking you’re a great writer is a creepily misguided thing. Many of us have met artists who can’t tell the difference, and have understandably decided to run in the other direction.

The problem is exacerbated when it’s noted, with perfect historical accuracy, that we can now pick and choose our influences and call whatever we do art. I noted as much two paragraphs ago. But while we are all subsumed in a postmodern world, there are clearly cracks in its description of reality. For example: we can borrow all we want, but we only get to describe our own experience, and really no other. Arthur Danto described this conundrum quite well when he talked about how artists in the postmodern era are able to pick and choose styles from other eras, true enough, but are forever unable to live the contexts that gave rise to them in the first place. This means that artists who try like hell to paint like John Singer Sargent, or sculpt like Rodin, are not making art; they are making eerie monuments to the dead. Being influenced by them is one thing; becoming them is quite another. Until someone invents a time machine, artists are really only allowed to ask what Right Now looks like, and what is relevant to a description of that experience.

While a suspicion of craft for craft’s sake is legitimate, it usually plays out as a suspicion of craft itself. Which, I’m taking some pains to point out, is nuts. Understand: denying craft to make a point is perfectly fine. I can see denying the sensual in a sensual experience in order to do that. Also fair is expanding on the idea of what sensual means in the first place. Still good. What is totally crazy is to banish sensuality forever.

But there is a further hurdle for craft to clear, and it’s probably the highest: nobody really wants to become highly skilled at anything anymore. Doing so could take years, maybe even a lifetime. Who wants to spend a lifetime doing anything? Our culture is described quite well by the researcher Linda Stone, who identified a mental state she called Continuous Partial Attention: skimming lightly over everything, and sinking deeply into nothing. Constant distractions and shorter attention spans mark the mental state of most of us.

The problem is that tangible skills are still relevant. For example: whatever life choices you have made for yourself, thank a farmer for growing the food that lets you pursue them. Somebody, probably an immigrant, stuck their hands in the dirt to make sure you had something to eat. Just as somebody made you a house to live in, a bed to sleep on, shoes to wear, and so on and so on. Our distance from most of this stuff doesn’t mean it doesn’t happen; it just means we’re rich enough to ignore it. Should we care? You get to answer that for yourself, of course, but for me, as most of us disappear into continuous partial attention, the answer is yes.

The personal is political, and one’s art is one of the most personal, and therefore political, things of all. Hands-on skills are for the most part in decline, and I think it is a political act to re-engage with them, a political act that seems relevant to our dissociated postmodern present. It’s unfortunate to me that artists in particular get really touchy around anyone technically proficient, like furniture makers, or fabricators, or welders, or machinists, or even the occasional sculptor. They tell each other that their ideas make them better than mere craftsmen, but most of the time they struggle to articulate what their ideas actually are. To paraphrase David Hockney, they think they’re poets, but nobody taught them how to spell.

I hope my own work can reclaim some of the agency that I’m not sure I really have anymore. Can I overcome my own continuous partial attention? Can I find a level of competence in myself, when I’m not really sure that I know how to do anything well? Can I make something beautiful, rather than say something clever? Can beautiful and smart be the same thing? Can art feel necessary? I want to make the things that ask those questions, and so far I’m not sure I’ve managed to articulate clear answers of my own. But what I do know now, years into all of this, is that thinking about what the answers might be is every bit as rewarding as the making, and that the making led to the thinking. Both form a continuum, always being added to and refined, never becoming definitive, and never finished. And on that note, I have some carving to do.


Friday, Feb 15, 2013

“The hand is the cutting edge of the mind” ---Jacob Bronowski

Digital

The five digits on the ends of our arms are more complex than any other paw or flipper or hoof or fin or tentacle out there in the natural world. Even the hands of other primates don’t compare, which is what allows us to play Bach fugues on the guitar, or to cut the complicated joinery required to make a Shinto temple. It’s no overstatement to say that because of our fantastically dexterous hands we have managed not only to survive as a species, but to thrive. Over the course of many thousands of years we have used our hands to make things such as weapons and shelters, which successfully kept far stronger and faster animals at bay. Realizing we were on to a good thing (and with our survival somewhat assured), we began to make countless other things that stagger with their complexity, beauty, and utility.

Of course, our brains are telling our hands what to do. So isn’t the evolution of our brain the more important point? And the answer is, surprisingly, probably not. It turns out that clever brains and clever hands are inextricably mixed, and you can’t have one without the other. Anthropologists such as Sherwood Washburn have written that the modern human brain might have evolved because of tool use, rather than tool use appearing because of our brains. As he says: “From the evolutionary point of view, behavior and structure form an interacting complex, with each change in one affecting the other. Man began when populations of apes, about a million years ago, started the bipedal, tool using way of life.” The structure of the arm and hand of the Australopithecus species in particular (from which Lucy sprang) allowed for gripping and holding things in a way that until that time was completely unique. That meant that eventually our big brains could write those fugues for the guitar, but only because of the possibilities represented by, and as a direct outcome of, our hands manipulating whatever was in their reach. That helped push the development of the brain that sits nestled in our heads today.

Making things is how we got here. It was our evolutionary good fortune, a lucky adaptation that ultimately stamped our ticket to the pinnacle of the food chain. As we evolved, our ability to make things evolved with us. Behavior and structure form an interacting complex, after all. Now we have really clever hands being directed by really clever brains. We make dams across raging rivers, we link continents with interstate highways, we travel under the seas with submarines, above the clouds with airplanes, and even enter outer space now and again with spaceships. But while we are able to make these things, fewer and fewer of us are directly involved in the process. In fact, today we live in a world that most of the time no longer requires us to make anything. We are free of the bondage of physical labor in the contemporary world (except if you are a Mexican living in the U.S., as a visit to any farm, restaurant, or construction site will show), and for the most part we’re pretty happy about that. Making things is hard, after all, and when we became smart enough to invent machines to do it, we jumped at the chance.

We have moved away from the brute labor that marked the industrial revolution, and towards a service-based economy that for most people has banished physical interaction with the world. A version of the same thing has happened with our ability to creatively influence our surroundings. As more things are made for us, we have become accustomed to plain old buying rather than making. As Marx said, we have gone from ‘Being into having’, transitioning from seeing ourselves as ‘producers’ in a modernist sense to ‘consumers’ in a postmodern one.

As we lose the notion of how things are made, we also lose the ability to engage in the complex conversation it took to make those things in the first place. We are cut off more and more from the world, not only physically but creatively.

Now I know what you’re thinking: we can still be creative outside of the physical world. Think of all the software writing, the game making, the website designing, and the 3D modeling going on. Aren’t computers actually expanding the idea of what creativity is in the first place? And the answer is: yes! And that’s good. But this observation misses the point. I’m talking about a creativity that exists in what my code-writing friends call ‘the meat space’, or ‘the blue room’: the physical world. This distinction is important because tangible skills are important. Try living in a virtual house, or eating virtual food, or wearing virtual clothes. All of these virtual things, I’m sorry to say, are unnecessary. The real versions of food and shelter, however, are non-negotiable. Not having a clue about the basics of that non-negotiable reality is what I’m talking about.

If we agree that we are cut off from the physical world, the irony is that from an artist’s perspective this shift has meant a certain kind of freedom. Art making has shed its physical skin to become ‘conceptual’, meaning it’s an activity that creatively interacts with ideas rather than the making of objects. It stands to reason that art would follow the same arc as everything else in the transition from ‘Being into having’ that Marx describes, and get itself out from under the drudgery of production, and away from objects in general.

Art historians might interject at this point with a more descriptive explanation of how the visual arts became modern: how the Impressionists saw mimesis as redundant in the era of photography, say, or how a new awareness of art from other places like Japan and Africa was an influence as well. The story is indeed intriguing, and long. However, the granular details of the modernist turn in art fail to explain the over-arching modernist narrative that also affected science, medicine, the military, the economy, literature, and so on. In each field the transition was slightly different, and at a slightly different time. But modernism was the paradigm that covered them all. For that reason it’s important to see art as part of a larger cultural progression of which it was merely a facet. And in that progression, art very much followed the modernist arc rather than led it: away from the handmade towards the industrial, away from the mimetic towards the abstract.

But we no longer live in a modernist era. Another shift has occurred, and a new paradigm has replaced the old one. No longer do we share the modernists’ faith in a logical progression forward; it has been replaced by the confusing set of possible paths of postmodernism. It’s important therefore to describe the shift away from a modernist perspective towards a postmodern one, because postmodernism provides a more complete portrait of who we are now, whether we like it or not.

Post

If you are uncomfortable with the idea of ultimate truth, or of objective knowledge, then you are postmodern. If you understand and laugh at ‘South Park’, because they make fun of everyone and never choose a side, then you are postmodern. (Basically, if you watch and enjoy television you are postmodern.) If you see creativity as bricolage rather than the search for pure originality, you are postmodern. If you see the world retreating into enclaves of people promoting their own versions of truth, because there seems to be ‘proof’ of any position imaginable, then you are postmodern. If you are uncomfortable seeing yourself as postmodern, because you don’t really believe in labels, you are postmodern.

In stark contrast to that, modernists had no problem at all with labels. The visual artists of that era enjoyed nothing more than coming up with definitive versions of art, and engaging in the resulting blood-sport of defending them. “Modernism, overall, was the age of manifestos,” as Arthur Danto writes, and each new movement set out to quash the ones that came before it. The historian Phyllis Freeman has counted as many as five hundred such manifestos, some of which, like the Surrealist and Futurist ones, are “nearly as well known as the works themselves,” again quoting Danto. Pretty much all of them defined themselves in opposition to what came before, and pretty much all of them declared themselves the real essence of art, and all others false.

Danto one more time: “In 1913, Malevich assured Matiushin that ‘the only meaningful direction for painting was Cubo-Futurism.’ In 1922, the Berlin Dadaists celebrated the end of all art except the machinekunst of Tatlin, and the same year the artists of Moscow declared that easel painting as such, abstract or figurative, belonged to an historically superseded society. ‘True art like true life takes a single road,’ Piet Mondrian wrote in 1937. Mondrian saw himself on that road in life as in art, in life because in art.”

It should be noted that Mondrian severed his relationship with his friend and collaborator Theo van Doesburg because van Doesburg believed that diagonal lines were valid in abstract art, an idea totally unacceptable to Mondrian. Definitions as rigid as that are nearly incomprehensible to artists today, because art can now be anything an artist says it is, an open-ended definition that has only existed in the minds of postmodernists. Which is to say, us.

If modernism springs from the Enlightenment, then our current take on things springs instead from Existentialism, which flies in the face of the Enlightenment belief in a steady progression towards better things. We are unsure of that idea today, to say the least, and it’s easy to see that profound change in perspective all around us. A few examples: Isaac Newton described a universe run on immutable laws. Then Einstein came along and said that everything was relative. Heisenberg identified the Uncertainty Principle. Now scientists look for Dark Matter, which can’t be seen, and debate String Theory, which can’t be observed (much less proved) because much of it happens in other dimensions.

Global finance used to be run by central government banks that pegged their currencies to gold. Now currency is literally an abstraction: beautiful etchings on paper whose worth is based on fiat. Wealth runs through corporations trading complicated financial instruments made up by math majors. People used to make money by providing tangible things; now they sell Futures, or Derivatives. Wealth is often in the form of stocks, whose worth is pegged to the vagaries of an ever-shifting consensus.

Societies used to take for granted their own regional versions of architecture, food, and language. Now the effects of colonization, war, technology, and global immigration have created a massive diaspora that has re-shuffled formerly homogeneous societies into something far less so. That has led to a re-thinking, or even re-definition, of social hierarchies, beliefs, and traditions. Regional food and architecture are the stuff of theme parks and chain restaurants now. Even the roles people play within these new societies are in flux. It’s not just new languages and foods we are accommodating, it’s new versions of parenting, gender roles, work relationships, and communication itself. The fracturing of recognizable things into their less clear parts is not merely a trope of postmodern art; it is a reflection of almost everything around us.

Along with that fractured and unraveled feeling is something even more unsettling, which is this: we have it within our grasp to fuck everything up for good. This ability is derived not only from our technological know-how, but from the way we wield that know-how to extract, kill, mow down, dredge up, emit, burn, use, and otherwise run roughshod over the things in our way. We live in an era described as the ‘6th great extinction’, with massive loss of animal and plant species. The other five great extinctions were caused by things like ice ages, asteroids, and volcanic eruptions. The 6th is being caused by us.

Maybe most strikingly, we have ‘Real’ problems that need addressing, and so assessing the ‘Best’ solutions to them would obviously be in our interest. But absolutes like ‘Real’ and ‘Best’ have been compromised. Objective truth is just not what we do anymore. In fact truth itself has been re-defined along postmodern lines, along with everything else mentioned above, to the point that ‘Truth’ with a capital ‘T’ now holds less sway than a murky kind of relativism.

In a postmodern world, debating the merits of one possible solution over another ushers us into a hall of mirrors rather than onto a clear path. Facts and truth and sincerity are suspect most of all, because we are familiar with them as strategies used in ads to sell us stuff. That means that despite the dystopian future many people believe we face, we are unable to speak rationally about any of it, because facts and truth and sincerity themselves are compromised. They seem like relics of modernism, back when we actually believed that reason would get us through this stuff. We don’t feel that way anymore. Reason itself as a path towards a better future is exactly the thing that postmodernism rose up to challenge.

If the above seems like it undermines the basis for critical thought itself, you’re right. It’s easy to see why the philosophers who initially set out to identify postmodernism did so with a palpable sense of sorrow. Guy Debord actually tried to come up with a strategy to get us beyond it, via the Situationists. He didn’t get that far. In 1994, Debord committed suicide.

Today we are accustomed to living a life accommodating the conflicting feelings of crisis and helplessness, which together form a jittery kind of apathy. And jittery apathy describes the contours of postmodernism itself. Nobody is saying that the days before modernism were great, of course (though the thought does occur that they might have gotten a bad rap), but life in the 20th and 21st centuries clearly has some not-so-small problems attached. We managed to fight two world wars in the first half of the 20th century, for example, and spent the rest of it under the pall of nuclear Armageddon. The irony is that many of our mistakes were brought on as the unintended consequence of trying to make things better, the genesis of the modernist impulse. The retort to that impulse writes itself. It’s easy to see how the rise of postmodernism, which sees the idea that pure reason is a panacea for our ills as a delusional fantasy, was a virtual guarantee. And so it is that we have traded the feeling of having a single and rigidly prescribed direction forward for the freedom of having no clear path at all.

Spectacle

Life today embodies a kind of mental and spiritual abstraction, with definitions of individual values and goals being up for grabs. As we settled into this postmodern state, consciously or not, we moved beyond Marx’s observation of ‘Being into having’, into what Guy Debord identified as ‘Having into appearing’. As we accommodated a world where our role is to consume, the value system shifted to what we consume.

Debord called this state of affairs, no surprise, the ‘Society of the Spectacle’, which is where we find ourselves today. In a world that understands imagery as a kind of true Esperanto, it would seem to me especially important for artists to understand what Debord described. It’s their job to make images after all. But contemporary art can often be less a critique of this state, or a method of revealing it, than just another example of it in a very pure sense. Artists today become brands, and their very actions result in creating value that is often free of an object, or even any kind of tangible use. So it is that many artists have become willing participants in all of this, as any trip to an art fair, biennial, or auction will attest. It’s not crazy to see the contemporary art world as the culmination of what Debord was talking about, the true distillation of the Society of the Spectacle. The transgressive, libertine model of artist behavior, courtesy of the modernists, has turned into a postmodern brand that is reliably marketable. For that reason the idea of trying to poke holes through the spectacle wouldn’t occur to many artists today. Postmodernism has been great for business.

If Debord is correct, then the ‘prestige and ultimate function’ of anything, art included, is an abstraction twice removed from objective reality. While the truth of that assertion is worthy of debate, it certainly is correct that the idea of making anything at all has an anachronistic air about it these days, which supports at least a part of Debord’s theory. In our day, the maturation of the modernist idea of removing the hand is easy to see. And while lots of big-name artists rely heavily on fabricators to make things for them at a very high level of craft, most wouldn’t have a clue about how to do much of it themselves. That lack of knowledge is orthodoxy, not only in the art world, but in the culture at large. That’s because the spectacle is designed to make sure you don’t know how to do much of anything other than consume. To make sure you don’t creatively interact, but passively receive. And as the spectacle has gathered steam, which it still appears to be doing, our ability to make things has for most of us been willingly set aside in order to participate fully in the spectacle. Those who persist in making objects tend to be seen as mere craftsmen, rather than artists following a certain traditional historical thread. But getting out of the way of the spectacle is the thread.

Saturday
Jul072012

Destroyer

Cut the prayer-hand
from
the air
with the eye-
shears,
lop its fingers off
with your kiss:

Now a folding takes place
that takes your breath away

-Paul Celan

 

Destroyer

Destroyer is the name of my show that will go up in August at the Greg Kucera Gallery and run through September.

I’ve often noticed, during the course of making a carving, that the pieces of wood I cut off and throw away are every bit as good as the parts I save. The distance between the ‘waste’ and the ‘art’ is often the thickness of a saw blade. This observation has led me to make a body of work where rather than separating the scrap from the sculpture, as I usually do, I have instead kept them joined. The juxtaposition of conscious choice next to unconscious consequence has become the conceptual starting point for the work.

By plunge cutting sections out of the blocks with a chainsaw, and then carving a series of chains at one end, the limbs are articulated out in one piece. I wanted there to be a sense of freedom or even abandon in the limbs, even as they remained chained to their heavily rooted cores. The freedom they have engineered is real enough, as are the limits that they are literally attached to.

There is always the feeling when I carve of being caught in a landslide, as the dust and shavings fall. Learning to ride a wave of potential disaster not only successfully, but gracefully, requires figuring out how to choose the least-worst path along the way. For this reason, carving seems to illustrate very clearly both entropy, where complex systems break down, and serendipity, where good fortune is culled out of an uncontrollable set of circumstances.

In a similar fashion, who we start out being isn’t exactly who we are in the end. But who we end up being depends on what we have to work with at the beginning. The entropic process of our own breaking down is tempered by our ability to manage the serendipity of the fall. We are our own work in progress. This work documents the hopefulness, the inventiveness, and the violence of that process.

Friday
Jul062012

I Heart Public Art

 

I ♥ public art

 

I love public art. This puts me at odds with pretty much everyone I know in the gallery world, for whom public art represents everything they either despise or fear: lack of control, compromise, having to dumb everything down, uncomprehending audiences. But I myself happily move between both worlds, doing public commissions and gallery shows, feeling equally content with each. I will admit that it’s a surprising turn of events for me, partly due to the fact that my own entrance into public art was somewhat late and improbable, but also partly due to the fact that being there has made some things clear about the gallery world that I couldn’t quite put my finger on before. Public art emerged as an escape hatch out of a place that I didn’t think needed escaping from, allowing me to understand that it’s easier to see where you live when you cross the street once in a while. Even from the short distance I now have, it’s easy to see that all of the criticisms of public art turn out to have either a corollary, or an equally egregious opposite, in the gallery world that goes unnoticed or ignored by most artists, such as having too much control, an inability to compromise, and frequently garbled and unquestioned intellectual flights of fancy. It’s barely worth mentioning that both sides have the uncomprehending audiences in common. Two sides of the same coin it would appear, and while both worlds have their problems, (no use sugarcoating a turd here), I’ve come to realize that leaving each of them from time to time makes it a lot easier to jump back in with both feet.

Full disclosure: public art in the last several decades mostly sucks. The visual tics of breaching salmon, or utterly anonymous (but friendly!) non-objective forms, are absolutely and legitimately mockable, and most people rarely pass up the opportunity to do just that. But in truth gallery shows are more often misses than hits as well, despite the freedom that artists have to make or say whatever they want, and the confidence they derive from addressing a self-selecting and sympathetic audience. The stark success-to-failure ratio of most art projects, be it in a gallery or a public park, tells a simple truth: making something good turns out to be just plain tough, and making something great turns out to be just plain unlikely. Art is hard. Still, good and even great things are being made now, in both the public and private realms, with a frequency that defies the odds. And that’s good. But less good is the way that public art lags in some ways, due in no small part to the commonly held view that public art is still the lesser of the two avenues for an artist to pursue. This view discourages many gallery artists from participating in public art projects almost out of reflex, rather than out of informed scrutiny. But a little scrutiny might be a good thing, because as I found out, learning about the public art world has a way of illuminating the gallery world as well, a fringe benefit that turned my conventional view of things upside down.

First, some context. It’s key to remember that public art as we know it is a thing still being born. In the wake of High Modernism, ‘Public Art’ slowly atrophied, to the point that what we are seeing now is an almost total re-invention of a practice that has for many decades been de-coupled from architecture. The term itself, ‘Public Art’, is a relatively new invention, and would probably be meaningless to the architects of Rockefeller Center in New York, say, who seamlessly integrated sculpture, bas relief, and murals into brilliant architecture, or to someone like Gustav Vigeland, the author of 212 sculptures in an 80-acre park in Oslo that wonderfully integrates art into landscape. ‘Public Art’ even in the ’30s and ’40s was still thought of as being an integral part of architecture, or of landscape, and not something separate from it. The blurred lines between the two disciplines hardened and separated somewhere in the not-too-distant past, and it is unclear if we have figured out a way to put them back together yet.

Part of it was just bad timing. As the ’70s loomed, and modernism was running out of gas, artists were moving away from making objects anyway. Modernist architects were never big on ‘decoration’ to begin with, so the resulting drift between architects making unadorned boxes, and scruffy hippies pushing around dirt in the desert, is an easy one to predict. The result was a loss of at least a couple of generations of artists and architects who could really talk to each other. Art schools responded to the new shift in thinking by becoming more intellectual affairs than in the past (at least according to their defenders), and vastly pared down the teaching of technical skills, their reason for being up until that point, in favor of seminar classes in art history and theory. The result is that today there is very little training for people who are interested in making public work, (other than a new program at the UW!) and certainly very little training in project management, budgeting, or even the kind of broad visual literacy inclusive of architecture, in art school. The difficulties of such a situation are self-evident for those making public work, and the only solution we seem to have come up with so far is simply on-the-job training, which sometimes works, and sometimes not. Be that as it may, starting in the late ’50s in Philadelphia, the 1% for art program was launched, an idea that has slowly, very slowly, (and not universally) spread around the country. Now public art is often a legal mandate, and artists and architects are partners again in a relationship that can resemble an arranged marriage more than simply two great flavors that taste great together. The result of this arrangement has been to create a style with the unfortunate moniker ‘Plop Art’, which, its unflattering image aside, accurately describes how a piece of art ends up in front of a building. It’s the last thing there, and the architects for the most part have had no input, or interest, in integrating it into a context larger than a predetermined concrete pad.

It doesn’t take a very long look at the history of either art or architecture to notice that this estrangement between artists and architects is new. It needs to be said that most art in the Western canon is public art. That is, made for the church or state in some capacity. And it also needs to be said that the line between what made one an artist and what made one an architect was very fluid, which is why Michelangelo, Bernini, and Brunelleschi (who trained as a goldsmith) were able to take on massive architectural projects. That began to change with the influence of the Dutch, the same people that brought us the tulip craze. In the 17th century they created what was to become the gallery system as we understand it today, a system that led art away from a mostly public endeavor to one that was (and is) mostly private. And while we eventually figured out that making tulip bulbs worth astronomical sums was nothing more than a bizarre fetish, it remains unclear why we have done something quite similar with visual art.

The Dutch system relied on the merchant class, people of some wealth, who were looking for a way of buying up the cultural ladder. The same of course holds true today. The idea of art having a more general audience regardless of class is even newer; that happened after the French Revolution, when the Louvre was opened as a gallery for a general audience. The idea of a public art institution was born then and quickly gained wide acceptance.  But the buying and selling of art, the disseminating of paintings and sculptures, was already set; it was designed as a top down enterprise, with the wealthy buying it and supporting it, and so it remains unchanged today.

There is a problem however with a top-down method of cultural support, and it is a big one: it never works. Culture is a thing that happens from the bottom and moves up. Never is the process reversed, though the Dutch gallery system that we have inherited has taken a stab at doing just that for the past three hundred years. When a top-down process is brought to bear on an art form, the results are usually to tame an unruly thing, and domesticate it. Take two short examples: jazz and opera. Jazz was once the most popular musical form in America, an exotic, primal, even dangerous music that drove people wild on the dance floor. Now it is a degree program in music school, and its most common venues tend to be cruise ships and hotel lobbies, where it mostly minds its manners and stays out of the way. Opera also ruled supreme at one time as the most popular art form of its day. And why not? It mixed outrageous spectacle with paper-thin boy-meets-girl stories, and cranked up the volume to eleven. It has been rightly said that all of popular music has used this template, but what gets forgotten is that opera itself borrowed its winning formula from folk songs and folk stories, adding complexity and bombast to gain more of an audience. Now of course, opera is identified with the upper crust of society, and viewed with outright suspicion by those of the middle and lower classes, who of course are an overwhelming majority of the people. If the current music world were to be thought of as a tree, jazz and opera would be small dry twigs far from sunlight. Their top-down incarnations are the shriveled and less vibrant versions of their bottom-up origins. The examples I could name are not limited to music; they are in virtually any genre: dance, fashion, design, all are fed from a bottom-up system. It is quite a bit more challenging to find vibrant parts of the culture that began at the top and moved downward.

This top-down bottom-up observation raises a couple of commonly held objections about visual art. The first being simply this: so what? Art is elitist anyway. I hear this from all sorts of people in all sorts of different contexts. But the answer is a simple and unequivocal no. Art exists in every culture for a reason: it helps people to digest the experiences of their lives, it strings together generations by reminding them of the past, it acts as the connective tissue between people who are misanthropic by nature. I have seen the effects of this in action. When I was young I lived in an Eskimo village on an island in the Bering Sea. When the men would come back from hunting, they would sit with their families and tell stories, or carve ivory, or drum while the women would sing and dance together. They would make art, essentially, and transfer their shared identity to their children in the process. The need to do this wasn’t a sign of highbrow tastes, or any other kind of brow for that matter. It seemed to speak more to the fact that reflective beings, who can understand the concept of the past and the future, and who can picture their own demise, need to make some semblance of sense out of all of this stuff we go through. No one is going to solve any of it of course, but no one is let off the hook for ignoring it either, and art in its many forms is part of that investigation.

The second objection raised by the top-down approach of the gallery system is that it hasn’t exactly killed art off. In fact there is an argument that art is more vibrant than ever, with a booming gallery scene, lots and lots of artists, record auction prices, etc. All true. But part of that is simply due to the fact that what is really booming is the creation of extremely wealthy people, not only here in the U.S. but also in places like Russia, India, Brazil, Mexico, and other corners of the world that up until recently hadn’t had many. But a top-down model means that the money is spent by relatively few people, and ends up in relatively few hands. Most artists, a staggering 95% if the statistics can be believed, stop making art five years after leaving school, mainly due to being really, really broke, and realizing how completely un-romantic it is. And despite the fact that Seattle is a pretty good art town relative to its size, there are perhaps 10 or so collectors here, maybe fewer, who regularly buy the work of local artists from local galleries, making it clear that a very small group of people get to be the arbiters of what becomes visual culture.

Making those choices has become more and more challenging as the number of artists has mushroomed exponentially all over the world, straining to the point of incredulity the idea of a few key collectors and curators consistently making the right choices about what is culturally relevant. Thumb through a 10-year-old Artforum, or even a 10-year-old Christie’s catalog, and it soon becomes clear that a large number of those artists have disappeared. While it’s true that Kings and Popes made fine choices when they ran things, they did so with a far smaller pool of artists to choose from. Pope Paul V famously gave Bernini a handful of gold coins when he was a boy of eleven to pay for his art education. The buzz was already on him, and the Pope knew about it. I’m not sure that the current Pope, much less a hedge fund manager or a Russian oligarch, would know where to begin in getting a sense of the contemporary art world, and I’m not sure anybody else does either. What seems clear is that some semblance of a bottom-up system must be employed as a way of vetting the slim number of choices that are being made by collectors, if an accurate reflection of current culture is to be the goal. At some point the culture itself must weigh in on whether or not this goal is being achieved, and the system as it exists today simply doesn’t allow that to happen. Galleries do what they can, opening their doors to an overwhelmingly non-paying audience, and showing young and untested artists regularly. Still, the audience that galleries attract is relatively small and self-selecting, and they have little say in what is good, or not good, or why. Galleries don’t provide a ballot box next to the cash register; there is simply a cash register, and the artists who get people to vote with their dollars stand a far better chance of advancing in their careers than those who do not.

Many galleries now don’t ‘sell’ pieces, they ‘place’ them, meaning that they find the best collections with the highest profiles, and make deals with collectors about donating the work to institutions, rather than re-selling it at auction. This system does an end run around any kind of broad agreement on what is good or bad, and can propel artists right into museums and institutions with very little input from an actual audience. Artists who find themselves in this happy circumstance are rarely demoted, because to do so is to affect the sale price of their work, which rises significantly with an institutional seal of approval. The work that these artists make can become highly sought after simply as investments, as the prices for contemporary art seem to rise ever higher and higher. Meanwhile, the aforementioned collections of wealthy patrons, that exceedingly small group of people, end up forming a supply line of art to museums unable to afford much of it themselves. And so it is that museums today are not so different from what they were at their birth in 1793: collections gathered and bought by the rich, and looked at with curiosity by a general public removed from the process of how it all got there.

The capricious abstraction of what supposedly becomes visual culture is hard to track in a museum or gallery setting. The jury is out as to whether or not it even works. Certainly curators have been interested in art made by skateboarders, or ‘outsider artists’, which really means people who haven’t gone to art school, so in a sense even those within the system have their doubts as well. Which brings me back to public art. Which, as I mentioned, I love. Not because I think it has solved the top-down dilemma that gallery art faces, not just because of its clear acknowledgement of art’s utility, not just because of the way it eliminates preaching to the choir by venturing outside of art’s balkanized environs, not just because of the way it allows artists to expand on what they do in terms of scale and materials, and not because it circumvents museums and allows work to be permanently displayed in a public setting. No, even more important than all of this, (though all of the above is immensely important) I love public art because it allows artists to be full-time in the studio, making not only the public work that put them there in the first place, but actually subsidizing the creation of gallery shows as well. It allows artists to be artists.

Artists making art in their studios, it must be understood, is a rare and beautiful event. The realities of money and the nagging necessity to make more of it constantly pull artists away from their studio practice, making most artists part-timers, squeezing what they do in between jobs, or after work. Time equals money, true enough, but for most artists, money equals time, lots and lots of time, a huge chunk of their lives that is devoted to something other than making stuff in their studios. The most obvious solution to this dilemma is simply to make one’s living as an artist, and the most obvious-sounding way to go about doing that is to be represented by a gallery. So let’s run a few numbers. In order to make $30,000 a year as an artist showing in a gallery, one must sell a minimum of $75,000 worth of stuff. This figure assumes the industry-standard 50/50 split of art sales with a dealer, plus studio rent and expenses, which I list here at a modest $15,000. Subtract some non-negotiable things like taxes, which are alarmingly high for the self-employed, as well as health insurance, tools and materials, fabrication expenses, vehicles, etc., and that $30,000 profit seems like a pretty tall mountain to scale. Remember also that solo shows are rarely less than two years apart in any one town, so doing shows outside of where one lives becomes essential. To those shows held somewhere else, add packing and transportation costs, which most galleries only partially pay for, and assume that there will be damage along the way. And about that gallery out of town, congratulations! Getting a show in your own back yard is difficult, but to get one somewhere else is a far more difficult assignment; doing so lands you in a very slender minority. But be aware that all galleries are not created equal; some are able to sell work more often, and for more money, than others, so it is also essential in this equation to be in one of the better galleries. Oops, I mean in two of those galleries, one in the town that you live in, and then another one in New York or L.A. Most ideal would be to have three galleries, all top shelf of course, in the major art centers, (which are really the major collector centers), and then one where you live. Assuming one is able to get into the right galleries and have some successful shows, the next hurdle to surmount makes the others seem like speed bumps: the constantly changing and fickle art market, which can snuff out a career full of steam so fast it seems to defy physics. To paraphrase Robert Henri, art is the best pursuit in the world, and the worst job. So how to solve the realities of paying rent and eating food?

The first public art commission I landed paid me $108,000, which was more money than I had made in my previous 12 years of showing in galleries. In order to get the gig, I had to describe my proposed project to a room full of mostly non-art people, including a building maintenance guy. I had to make sense to the janitor! I stumbled and fumbled, the maintenance guy and a property manager asked the most insightful questions, and I was awarded the commission. This was a mind-blowing, life-changing event. The profit I made allowed me to quit my day job and concentrate full time on my next show. The extra time I was able to put into the studio was reflected in the work, which is not so surprising; it turns out that spending more time making and ruminating on something results in a better outcome. The work sold reasonably well as a result, and that tided me over until I got my next public art gig, and so on, forming a cycle that has allowed me to cobble together a living as a full-time artist. My story is not a unique one. There are a handful of local artists who have had a very similar experience, due in no small part to the very forward-thinking public arts administrators in this town, who know exactly what the benefits of a commission for real money will do to an artist’s life. And the benefits are hard to overstate.

I love the mural of the stars on the ceiling in Grand Central Station. I love the Nebraska state capitol. I love the peacock in the downtown Seattle Library. I love the giant teddy bear in Central Park, the reflecting orb in Millennium Park, the cast tree root in Bellevue. Whether or not public art in its current 1% state has produced a Rockefeller Center yet is a moot point to me. Gallery shows are usually bad too, and I keep going to see them. The Princess who realized she had to kiss a lot of frogs to find the Prince got it right, and when I go see shows I try to remember her example. But despite the good things happening in both public art and gallery art, it’s fair to say that moving from one to the other has made me aware that both are calcified, and have a tendency to retreat quite comfortably into bubbles of their own making. I’m not advocating for fusing these two bubbles together, because then you either get one enormous bubble, or worse, three smaller ones. A better solution to me would be for the inhabitants of each to simply step outside every now and then and take a look around. It would be a far less attractive alternative for artists to stay comfortably ensconced inside their chosen bubbles, where perfectly reasonable things can begin to sound ridiculous, simply because they clash with an unspoken consensus view. Take for example the idea that gallery artists’ art could, and should, involve a general audience; such an idea strikes many artists as just not their job. Sure, somebody like Shakespeare had career goals that meshed a lot more with Stephen King than with James Joyce, and sure, Bach wrote music for his local parish church, but those are crazy exceptions. Right? I don’t think so. Things that attain a kind of eternal quality assume, and accept, a wide audience, while most contemporary art does not. It remains fixated on the idea that it critiques the culture, or even defines it, even though most of the culture doesn’t pay it any attention. Robert Motherwell was long ago quoted as saying that art at the end of the 20th century would be a battle between Picasso and Duchamp. Needless to say, Picasso has gotten his ass handed to him in a bag. Conceptual art has become the new orthodoxy, rooted in something that was hard won and enduring, and it has since evolved into something that is now too frequently just facile enough to feel rote. Much like the fate of modernism in another era, whose hard-won principles became shortcuts for lesser practitioners, many artists today seem quite content to be merely clever, and squirm a bit at the notion that one would ask what the ‘concept’ is in the first place.

Public art is in quite a bubble as well. It is fixated on trying to be art, without the teeth. And I think it is quite possible to include teeth in public art. When Brian Eno was asked his opinion of New Age music, which he is generally credited with inspiring, he said he didn’t like any of it because it lacked a sense of evil. True, that. When public artists voluntarily dumb things down, erase the evil, they ultimately come across as condescending. People aren’t stupid; they can usually sense the artist’s lack of trust, or even respect, and the consequences of that are as predictable as they are unattractive. What happens is that frustrated artists make boring art, which both the artist and the public, through a long and intertwined process, have had a hand in creating. This of course means that both sides can, and do, legitimately blame the other for the outcome, and along the way confirm their stereotypes of the other. Did I mention that art is hard?

The good news is that art is an unstoppable thing that defies all of the complex barriers we have erected in its path, and the even better news is that we are at a place in art history where so many things are possible. The dominant question of our era is not about what art could or should look like, where it should go, or who should make it. For hundreds of years artists have explored those subjects, and now we find ourselves in the post-post-modern era with those issues resolved. Now we can ask ourselves what art can do. And in answering that question, the first thing we might consider is to open the doors of all the white cubes and starchitect museums around the world, and let all that art, and all those artists, wander outside for a while.
