
User-Centred Design

In essence, this is a set of design principles that prioritise the user, and the user’s experience of the product, throughout the design process. A key point is that the user, as the end target of the design process, should be involved at every stage along the way. The philosophy is also that the manufacturer should design a product the user needs and wants to use, rather than trying to convince them to use something they don’t want.

Six principles are set out that should ensure a design is user-centred:

  1. The design is based upon an explicit understanding of users, tasks and environments.
  2. Users are involved throughout design and development.
  3. The design is driven and refined by user-centred evaluation.
  4. The process is iterative (procedures are repeated and refined to get better results).
  5. The design addresses the whole user experience.
  6. The design team includes multidisciplinary skills and perspectives.

The UCD idea seems to be applied mainly to the development of software, and of websites in particular. A major part of the process is the “rhetorical situation”, where in this case the rhetoric appears to be whatever message you want to communicate to your audience. The rhetorical situation has three components: audience, purpose and context.

“Personas” are developed as archetypal focal points of the design process. A persona is a sort of distillation of all the types of people the design is aimed at, rolled into one person, built from information gathered in interviews. There’s also a secondary persona, and even an anti-persona, representing the kind of people you’re specifically not aiming your design at.

“Scenarios” are produced in which a period of time or sequence of events is played out in written form, with the personas appearing as characters. They can represent best case, worst case or “meh” case situations. Since the personas are given names and backstories, the idea seems to be that the people designing the product have a more concrete point of reference for their target audience and can work in a more informed way.

 

Nozze di Cana – Latour & Lowe

A good article about the painstaking fabrication of an exact replica of the painting “Le Nozze di Cana” by Veronese, which is currently in the Louvre in Paris, having been removed in 1797 from the refectory of San Giorgio Maggiore in Venice (now home to the Fondazione Cini). The replica now hangs in Venice in its rightful location. An extremely advanced, complex procedure was followed to scan the painting section by section, and to print an exact copy of it, with a custom-made printer, onto a specially prepared gesso surface.

“Le Nozze Di Cana” by Veronese

The essay focuses on the relationship between the original in Paris and the replica in Venice, and examines the irony that the replica, despite being clearly labelled as such, appears to be the more authentic work, since the artist painted the original to fit entirely within its original setting in Venice, in terms of the architecture, lighting and general context. The point is made that when considering the impact of an artwork, we need to consider its context in terms of all the reproductions or copies that have been made of it, and how good they are. The argument is that if an artwork has spawned lots of quality copies, and therefore has great fecundity, the original will be valued even more highly than it otherwise would be, and this overall picture (what the authors call the “career” of the artwork) must be considered.

The “aura” of the work is brought up, the usual argument being that only the original can truly possess this aura. But we’re asked to consider the example of a Shakespeare play, which may have been interpreted many times. If there’s a particularly good version, critics may claim it gets closer to the original intent of the work than any before it, giving us an ever clearer view of the “aura”, and we don’t even think to mention notions of originality or copying. So why one rule for the performing arts and another for the visual arts?

They also make a point about effort, expense and resources: producing a replica of a painting requires far less of these than producing the original did, which helps justify our perception of the original as superior. Every time a new version of “King Lear” is staged, by contrast, it takes a similar amount of effort and resources, leaving us with no real sense of a tangible gap between the original and subsequent versions. A good point is made about caring for original works of art: even if no physical reproductions are made, maintaining a single work of art requires it to be cleaned, repaired and restored periodically. So even with the original, there appears to be no single, constant state in which it can be said to exist.

These are all good points, but to me the intent of a reproduction is the important thing. If the intention is simply to ape or mimic an artwork, then that is an inherently less worthy endeavour than a reinterpretation or reworking would be. The amount of effort involved doesn’t matter; you could argue that somebody who paints a reproduction by hand puts in as much work as the original artist, maybe more, in order to successfully mimic their style. The important factor is that there is no creative agency there, and no valid artistic reason to make the copy, unless the work is being explored through another medium or style. That would bring it closer in line with the performing arts examples.

A really good example of an artist whose work speaks to all these topics is Sol LeWitt. He produced a long series of wall drawings during his career, which he would distribute to multiple galleries in the form of written sets of instructions that technicians would then draft onto the wall. From one gallery to another the same piece would of course vary slightly, raising the question of which version is the definitive one, or whether the written set of instructions is itself the real artwork.
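Since the wall drawings are essentially artworks in the form of written algorithms, they translate almost directly into code. As an illustration, here’s a minimal sketch in Processing (the Java-based tool that comes up again further down this page) executing an instruction set in the spirit of LeWitt’s Wall Drawing #118, which I’m paraphrasing from memory as “place fifty points at random, and connect all of the points with straight lines”; like each gallery’s draft, every run of the program produces a slightly different drawing from the same instructions.

// A rough Processing sketch executing a LeWitt-style instruction set, in the
// spirit of Wall Drawing #118 (paraphrased from memory, so treat the wording
// as approximate): place fifty points at random, then connect all of the
// points with straight lines. Every run yields a different "draft".

int numPoints = 50;

void setup() {
  size(800, 500);
  background(255);
  stroke(0, 60);      // faint lines, like hard pencil on a wall
  strokeWeight(1);

  float[] xs = new float[numPoints];
  float[] ys = new float[numPoints];
  for (int i = 0; i < numPoints; i++) {
    xs[i] = random(20, width - 20);
    ys[i] = random(20, height - 20);
  }
  // Connect every point to every other point with a straight line.
  for (int i = 0; i < numPoints; i++) {
    for (int j = i + 1; j < numPoints; j++) {
      line(xs[i], ys[i], xs[j], ys[j]);
    }
  }
}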

 

Collaborative Writing Contribution

This is my contribution to a collaborative writing project, which had a loose theme of “The Highs and Lows of Social Media”…

 

“Twitter is hard, and scary, and hard, but people keep saying it’s currently the most important social networking tool, and absolutely vital to any kind of worthwhile online presence. So no pressure then. But as a newbie, where are you supposed to start? Well, with an example of a really good tweet I suppose. Let’s see…

‘Facebook is down. Please refrain from genuine human contact until the problem is resolved.’

Now that’s good! It’s satire, because it implies that people who use social networking sites have no actual interpersonal skills, but it also, you know, is a piece of social networking itself. So it’s a kind of double-strength satire. It turns out this tweet was written by no less a person, I mean Person, than God. Despite shouldering the responsibility of creating and overseeing the universe, apparently He can still find time to tweet on topics as diverse as Wimbledon, Donald Trump’s presidential aspirations and gun ownership. This deity, as is the fashion, has a human incarnation on Earth: David Javerbaum, an Emmy award-winning American comedy writer whose credits include The Daily Show with Jon Stewart and The Late Show with David Letterman. He has been running @TheTweetOfGod since October 2010, and has to date gathered an impressive flock of over 2.15 million followers.

Javerbaum’s background as a professional writer shines through on his Twitter feed. Frequently hilarious, often topical and regularly poignant, his tweets are the best lesson I’ve seen in constructing messages of 140 characters or fewer. Assuming the identity of one of history’s most feared / worshipped creators gives him a unique perspective from which to comment on humanity’s peculiarities. You might assume that being omnipotent and omniscient would leave God with nobody to look up to. He does, however, follow just one person on Twitter – Justin Bieber. Justin has almost 70 million followers, and if there’s one quality you’d think the inspiration for so many major world religions would appreciate it’s popularity. Except now Katy Perry has even more followers. Sort it out God.

Hardly the most avid of social networkers myself, I have only recently set up a Twitter account. I’ve barely used it yet, and with an almost saintly restraint have resisted the urge to indulge in that most basic of human instincts and tweet endlessly about cats and how cute they are. Just as many people of a religious persuasion turn to their God(s) in times of need for solace and guidance, I turn to mine in the hope that some of His literary technique, satirical turn of phrase and 2.15 million followers might come my way. I’ve compensated for my own lack of original content by retweeting Him (I may not know much about microblogging, but I know what I like.) Though my start on Twitter has been a little shaky, I know I can do better. With God’s help.”

Human or Machine: A Subjective Comparison of Piet Mondrian’s “Composition With Lines” (1917) and a Computer Generated Picture

This is a mindmap based on a text published in “The Psychological Record” in 1966 by A. Michael Noll, an American computer engineer. Entitled “Human or Machine: A Subjective Comparison of Piet Mondrian’s ‘Composition With Lines’ (1917) and a Computer Generated Picture”, it describes how he took a famous 1917 painting by Mondrian as a starting point and developed a computer simulation to replicate it. The simulation was then shown to a group of test subjects, and their reactions to both it and a reproduction of the original were recorded, with surprising results.

It offers some interesting speculation about Mondrian’s possible working methods at that time, and raises questions about the artist’s ability to communicate with, and provoke an emotional response in, the viewer using purely abstract forms. It also gives a good insight into the state of computer programming and imaging hardware in the 1960s.

Noll's Mondrian Mindmap

Conor McGarrigle Artist Talk, Glucksman Gallery

A very informative talk by the artist Conor McGarrigle, given at the Glucksman Gallery on 29/10/15 as part of their series of talks coinciding with the George Boole themed show “Boolean Expressions” currently running there. He used the talk to give an overview of his own practice, which makes considerable use of technology, the internet, databases and data collection systems. To give it context, he also gave quite a comprehensive overview of the history of artists’ use of similar technology, tracing a line from the 1960s to today. Some of the content was surprising to me: heavy hitters from that era such as Robert Rauschenberg and Jasper Johns, artists previously known to me as innovative painters and sculptors, conducted a lot of experimental work with what was then newly emerging technology.

Three main aspects of artists’ engagement with emerging technology were identified: system (artists incorporating existing technological systems into their work), communication (the first communication satellites were now in orbit, offering unprecedented scope for connectivity) and utopia (the exciting prospect of a connected world, and the breaking down of barriers between art and the sciences). The early days of this endeavour were marked by the difficulty artists and other creatives had in accessing the technology, which was new, owned by big corporations, hugely expensive and intimidating. This was followed by a period, from the early nineties on, in which far greater availability of and access to technology, largely afforded by the emergence of the internet, led to a far more independent, anarchic period of activity. In recent times, however, McGarrigle sees something of a reversal of this trend, as artists need more help in gathering and dealing with the huge volumes of data currently being generated.

A major player in this history was the electrical engineer Billy Klüver. From 1967 he was involved in Experiments in Art and Technology (E.A.T.) with the artists Robert Rauschenberg and Robert Whitman, a group set up to initiate collaborative projects allowing artists and engineers to work together directly. This I found particularly fascinating: though I was familiar with Rauschenberg’s work as a painter and installation artist, I hadn’t realised he was involved in such cutting-edge research at such an early point.

A. Michael Noll’s Computer Simulated Mondrian

Recently I began to get interested in using computer programming tools to attempt to replicate paintings from the early Modernist period, particularly those in the style of geometric abstraction. There were a number of reasons for this. Firstly, I have long been a fan of these works, by artists such as Kazimir Malevich, Piet Mondrian and El Lissitzky. Though to the modern eye they may seem a little tame and academic, their departure from any type of representation was truly revolutionary at the time; these artists lived through great political and scientific changes in the early part of the 20th century. I was also interested in introducing elements of interactivity into my own artworks, and thought this might allow me to develop some skills to that end. There was also an interest in the general area of trying to somehow codify aesthetics, to see if there is some sort of algorithm for beauty, a question which fascinates me. And I was enjoying dicking about with programs to see if I could make them do cool stuff.

"Composition With Lines" (1917) Piet Mondrian
“Composition With Lines” (1917) Piet Mondrian

The first painting I focused my attention on was a famous Mondrian work, Composition With Lines, from 1917. I was advised to try an open-source, Java-based animation tool called Processing, and began to work with it. Due to the relative formal simplicity of the painting, I was able to get a passable program working which allows the user to generate their own version of the artwork by moving the mouse around while holding down a button. (The working interactive sketch is here https://dominicfee.info/uncategorized/albers-simulator/; a stripped-down version of the idea also appears after the pdf link below.) After a little research into other attempts to generate historical paintings with code, I was surprised to find that I had been beaten to it by the American computer engineer A. Michael Noll, by about 50 years. In 1964 he had taken this exact painting and written a computer program to replicate it. Not only that, but according to research he did at the time, the majority of the people he surveyed preferred his computer-generated artwork to the Mondrian original. The following pdf explains Noll’s methodology for the project, and draws some subjective comparisons between the genuine and simulated Mondrian paintings.

http://noll.uscannenberg.org/Art%20Papers/Mondrian.pdf
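Before getting into Noll’s observations, here is that stripped-down version of my own sketch. To be clear, this is not the actual program linked above, just a minimal illustration of the idea, assuming the simplest reading of the painting’s marks as short horizontal and vertical dashes laid down wherever the mouse is dragged.

// A minimal Processing sketch in the spirit of my interactive piece -- a
// simplified illustration, not the actual program linked above. Dragging the
// mouse scatters short horizontal and vertical dashes, loosely echoing the
// "plus and minus" marks of Composition With Lines.

void setup() {
  size(600, 600);
  background(245, 242, 230);   // off-white, roughly the tone of the canvas
  stroke(40);                  // dark grey marks
  strokeWeight(3);
}

void draw() {
  // Nothing here; all drawing happens in the mouse handler below, and the
  // marks accumulate because the background is never redrawn.
}

void mouseDragged() {
  float len = random(8, 24);   // dash length varies a little
  if (random(1) < 0.5) {
    line(mouseX - len / 2, mouseY, mouseX + len / 2, mouseY);   // horizontal
  } else {
    line(mouseX, mouseY - len / 2, mouseX, mouseY + len / 2);   // vertical
  }
}

void keyPressed() {
  background(245, 242, 230);   // press any key to wipe the canvas
}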

He makes some interesting observations on the fact that the simulated versions contained more randomness in the arrangement of their graphical elements than the genuine artwork, as a result of the nature of the programming algorithms used. Despite this, the simulations created a more profound emotional response in the test subjects, leading Noll to speculate on the nature and perceived importance of the artist’s ability to manipulate and affect the emotional state of his audience.
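Noll’s paper describes his actual method; purely to give a flavour of where that extra randomness comes from, here’s a guess at what such a generator might look like. This is not Noll’s 1964 algorithm (see the pdf above for that), and the circular spread of the marks is my own assumption about the composition.

// NOT Noll's actual 1964 algorithm (see the linked paper) -- just a sketch of
// the general flavour: every position, orientation and length of every bar
// comes from a pseudorandom number generator rather than from an artist's
// decision, which is where the extra randomness in the arrangement comes from.

int numBars = 450;   // arbitrary count, chosen by eye

void setup() {
  size(600, 600);
  background(245, 242, 230);
  stroke(40);
  strokeWeight(3);
  noLoop();          // generate one composition per run

  for (int i = 0; i < numBars; i++) {
    // Keep marks within a circle -- an assumption on my part about the
    // roughly oval spread of the painting's composition.
    float angle = random(TWO_PI);
    float r = sqrt(random(1)) * width * 0.45;   // sqrt() gives even area coverage
    float x = width / 2 + cos(angle) * r;
    float y = height / 2 + sin(angle) * r;
    float len = random(8, 24);
    if (random(1) < 0.5) {
      line(x - len / 2, y, x + len / 2, y);     // horizontal dash
    } else {
      line(x, y - len / 2, x, y + len / 2);     // vertical dash
    }
  }
}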

“Computer Composition With Lines” (1964) A. Michael Noll

 

 

Nathan Bos – Scientific Collaborations

Why are scientific collaborations so difficult to sustain? It has been natural to think of scientists as potentially very good at collaboration, but attempts to set up computer-based collaborative projects (“collaboratories”) within the scientific community haven’t been very successful. It seems that while scientists are good collaborators, they function best in localised, face-to-face groups.

Three areas of difficulty are identified:

1. Knowledge (as opposed to mere information) is hard to transmit across distances. It’s much easier for a scientist to explain their ideas, which may be at the cutting edge of understanding, directly to a colleague than to someone over a computer network.

2. Scientists work independently most of the time. They are inclined to work to their own research and travel schedules.

3. They typically work for institutions, and working across institutional boundaries traditionally presents difficulties: legal issues may need to be resolved, and there is often a lot of protectiveness over intellectual property.

To help resolve these difficulties, The Science of Collaboratories (SOC) was set up: a five-year project funded by the National Science Foundation (NSF) to study large-scale academic research collaborations across many disciplines. The goals were to compare different collaborative projects, to develop theory about this emerging research form, and to develop strategies for facilitating more successful projects in the future. They ended up producing a seven-category taxonomy of collaboratories. Taxonomy is the science or technique of classification. (Why not just say “categories”? You see, this is why we’ve got point 1 above.)

There follows a very long-winded and utterly riveting account of what defines a collaboratory, the kind of sampling techniques used, and a bit with “prototypicality” in it.

Of the seven categories explained, a few notable ones were:

Shared Instrument: This category is set up to allow researchers access to expensive or normally inaccessible equipment. The example given is of twin telescopes in Hawaii which, owing to their remote location, are operated remotely by several subscribing universities. This kind of observation produces very large amounts of data that need to be dealt with.

Community Data System: An information resource that is created, maintained or improved by a geographically distributed community. The example given is the Protein Data Bank (PDB), which processes and distributes 3D structure data for proteins and other molecules. Interestingly, this project and others like it often drive great advances in 3D modelling and data visualisation techniques for dealing with the large datasets produced.

Open Community Contribution System: A group of often geographically separated people who unite to work on a specific research problem. The interesting thing is that it often involves members of the general public, who are encouraged to contribute work to a project, not necessarily data. Wikipedia is given as an example, but it also reminds me of the usefulness of amateur astronomers, who by sheer strength of numbers can monitor large portions of the sky that professionals can’t, and who have made many important discoveries.

 

Lawrence Lessig – The Future of Ideas

This was written in 2001; the author considers the future of the world wide web and isn’t very happy about the direction it’s taking. As he sees it, the web’s position as a self-regulated, democratic source of creative energy and opportunity is in danger of being undermined by market and political forces which seek to control and exploit it. Those who prospered in pre-internet times are suspicious and nervous of its potential, and those who embrace its liberating aspects haven’t yet stepped up to organise a defence of its positive qualities.

“Free” is a big buzzword here, and there’s a discussion of what the word means in this context: what kinds of things are free, what are not, and what arguably should be. There has been a growing culture of lawmakers putting legislation in place to protect the copyright of artists’ work, whether reproductions of artworks, corporate logos appearing in films, or musicians and artists being paid for the use of their work. While this is a positive, a culture has emerged of opportunistic people taking the idea far too far, which has undermined the creative processes of many projects and made them logistically very difficult to see through to conclusion. It’s this playing out of the age-old tension between socialist and capitalist mentalities that the author argues will continue to spell trouble for the internet as we go forward.

Seems like a good book; I’d like to see it through to the end if I get time. It would be interesting to see what 15 years’ worth of hindsight has done to the arguments.