ICM_Week 5_Objects/Arrays

For this week’s assignment I experimented with using classes, objects and arrays.
The idea was to have an array of squares fill up the entire canvas and make each one of them rotate with a slight delay.


I knew I had to use p5's push()/pop() functions to make the rectangles rotate independently, but placing those calls in different parts of the code gave me different results.

Result of trial 1 –
placed push()/pop() in the draw loop in the main sketch


Result of trial 2 –
placed push()/pop() in the display function within the class (Rectangle.js)

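Since push() and pop() save and restore the drawing state, wrapping each square's transforms inside the class's display function keeps every rotation independent. A minimal sketch of how trial 2's Rectangle.js might look (the member names and numbers here are my reconstruction, not the original code):

```javascript
// Rectangle.js — hypothetical reconstruction of trial 2
class Rectangle {
  constructor(x, y, size, delay) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.delay = delay; // per-square offset for the staggered rotation
    this.angle = 0;
  }

  update() {
    // rotate continuously, offset by this square's delay
    this.angle = frameCount * 0.02 + this.delay;
  }

  display() {
    push();                    // isolate this square's transform and style state
    translate(this.x, this.y); // move the origin to the square's center
    rotate(this.angle);
    rectMode(CENTER);
    rect(0, 0, this.size, this.size);
    pop();                     // restore the previous state for the next square
  }
}
```

If push()/pop() instead wraps the entire loop in draw(), the translate()/rotate() calls accumulate across squares, which is likely why the two placements behaved so differently.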

Trial 3 –

Couldn't figure out how to create a grid using arrays (in both x and y); so far I could only manage arraying in x.

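For the record, the usual way to fill both x and y is a pair of nested loops that push one object per grid cell into a single flat array. A sketch of how trial 3 could go, assuming a Rectangle class like the one from trial 2 (the names and numbers are illustrative, not my original code):

```javascript
// sketch.js — hypothetical nested-loop grid of rotating squares
let squares = [];
const cell = 40; // grid spacing in pixels

function setup() {
  createCanvas(400, 400);
  // nested loops cover the canvas in both x and y
  for (let x = cell / 2; x < width; x += cell) {
    for (let y = cell / 2; y < height; y += cell) {
      // delay grows across the grid, staggering the rotation
      let delay = (x + y) * 0.01;
      squares.push(new Rectangle(x, y, cell * 0.8, delay));
    }
  }
}

function draw() {
  background(220);
  for (let sq of squares) {
    sq.update();
    sq.display();
  }
}
```

One flat array is enough; the two loops only run at setup time to compute each square's (x, y) position.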

PC Week 3_Interaction design systems in public

Walmart self-checkout

What do you think is the biggest pain point for Walmart's self-checkout system? ……
Correct: 'Theft!' It's relatively easy to scam its simple system of correlating barcode scans with item weights. There are tons of articles online offering a bouquet of strategies for scamming the self-checkout system.

I went to Walmart yesterday to experience it firsthand. There are several loopholes in the system, and the overall UX seems very scattered. Each item took several minutes to scan and show up in the shopping cart, which caused a lot of confusion, especially while scanning multiple items at once. There is always the fear that you are scanning the same item multiple times and paying double. The whole interface needs to be organised and prioritised. Somehow our machine did not have audio assistance, which made things worse. Some items did not even have a barcode, and it's no fun to spend five minutes figuring out the item number and bagging it, only for the machine to respond with: "Unexpected item in bagging area. Remove item."

The whole self checkout system UX sure had me thinking of ways to scam it in revenge!


If I were to revisit the fantasy device as a self-checkout system, I would envision a simple system with minimal UI: a large, weight-sensitive table, perhaps like the Microsoft Surface table, on which you lay out all your items. A camera on top scans barcodes/product images, correlates them with the measured weights, and sends the items for bagging automatically.

WHAT IS PHYSICAL INTERACTION?


As I was approaching this question, I wondered if I was walking into an analysis-paralysis situation by overthinking something so simple to answer. Isn't interaction simply communication between two people…? I asserted as much, and found it pretty much futile to keep questioning, but the train of thought had already begun.
…Or is it interaction when it takes place between a person and an object, like a boxer practicing his punches on a punching bag? Or maybe between two objects, like our heart and lungs working in coordination to keep the blood flowing and oxygenated…?
When a Mimosa pudica herb closes its leaves in response to stimuli, is it interactive or merely reactive?
It may seem like new media creatives (myself included) would be quick to conclude 'interactive'; after all, if you were to artificially recreate a Mimosa herb using mechatronics, you would know what a long-winded process it would be. Setting up the sensors, motors and linkages, and programming all of the components to talk to each other to produce the desired result, is a commendable effort. Moreover, there is a plethora of new media projects (with an interactivity quotient similar to that of a Mimosa herb) which respond to stimuli rather one-dimensionally but are marketed as 'interactive art', widely exhibited at 'interactive media art festivals' and frequented by 'interaction designers and artists'.

So clearly there is a lot of buzz around interactive tech and media, but without a clear understanding of what it really means. In The Art of Interactive Design, Chris Crawford defines interaction as a conversation between two actors who "listen, think and speak." Physical interaction involves give and take; it is not simply a reaction (for example, a viewer watching a movie) or participation (a couple dancing).

So Crawford would clearly dismiss the notion of a Mimosa herb being interactive and I agree with him.

I think it’s OK to declare that in the context of interactive design or interactive media, for something to be interactive, it has to work outside of repetitive instances of mere cause and effect, unless the effect stirs up another cause leading to a different effect. This act of give and take is much more dynamic, engaging and meaningful.

But I also wonder why Crawford chose to use the term 'actors' instead of 'people'. Could it imply that the two entities are acting out roles on a given stage (context) where the interaction plays out? A context with unique rules, behaviors and sign systems? Is an AI machine talking to another AI machine a case of interaction design? Even though a person is only an audience to such an AI-driven chat show, it still takes a person to design and engineer it. How, then, would a person design such a platform for AI conversation? Should you design it to be decipherable by humans? Or what if you invented an entirely new language, much richer in semiotics in computer parlance, for AI bots to communicate with several times more efficiently? Would that still hold up to the idea of interaction design?

As you can see, the more you attempt to define it, the more obscure or inadequate the definition starts to feel. It can perhaps only be understood within a certain framework (context) of declarations and objectives.


WHY IS THERE A NEED TO DEFINE WHAT INTERACTION IS?

There isn't really, because there is perhaps no satisfactory answer. But in the process of trying to define it, you understand the boundaries of the concept, and a boundary can be thought of as both limiting and expansive. Moreover, you will have some objective parameters against which to judge your design. Lastly, and most importantly, the process of inquiry gives you better-informed visions of the future of interactivity!

Bret Victor, in A Brief Rant on the Future of Interaction Design, says, "I believe that our hands are the future!" It's quite a fun way of saying that interaction design, as we know it, is under-utilizing the capabilities of the human body. Our dexterous hands are capable of much more complex and nuanced gestures than simply swiping and tapping information on a sheet of glass. Bret has a term for it – 'pictures under glass'.

You can think of interactivity, at its most fundamental level, as a long string of information exchanges, and if you observe the different ways in which nature presents information, you can imagine a much richer future of interaction design, one which extends far beyond the screen, incorporating sights, sounds, smells, taste, touch and everything that could appeal to our senses!

This is indeed a beautiful idea, and it has been explored before as well. I appreciate Bret's essay for putting forth a case for a more organic and immersive interface design which fully exploits and extends human capabilities, but at the same time it is also frustrating not to find any answers on how to achieve that level of interactivity. I appreciate the detailed and well-documented presentation on the dexterity of the human hand, but all of those gestures are very one-dimensional and meant for specific actions.

Let me explain: I think every rosy idea should be tested in practical terms as an early prototype, even if in the form of a simple thought experiment. So I wondered: if I were to design an enterprise software suite, in which the user is presented with several layers of information and has to navigate several pages with multiple buttons to click and drop-down menus to select from, how much of our sensory or bio-mechanical capability should I exploit to control the interface? Should I design the software to accept as input the full spectrum of hand gestures, from grips to pinches to scissors to flicks? Or should you smell lavender every time you get an email? Or should I engineer my laptop's enclosure to slouch as it starts losing power…?

No doubt, the whole idea starts to feel quite comical!

In fact, it seems as if dealing with highly complex tasks of information handling and processing requires a much simpler control: a lowest common denominator of input actions.
I wonder if people would want to perform awkward 'hand/body gymnastics' to navigate Facebook! It seems to contradict the idea of interactivity as a tool. A tool, like a hammer, is meant to have an easy and comfortable interface (the handle where you grip it) and to leverage our bio-mechanics with amplified results on the output side (the hammerhead).
So perhaps what is needed is simply a minimal toolbox of effortless interactions capable of manipulating a large landscape of information.

I don't mean to disregard Bret's ideas; rather, I think we need a paradigm shift in the way we process, display and control information to make organic interactivity possible, something which has to be fueled by developments across tech, manufacturing, engineering and commerce.

I know I too have raised many questions without providing any answers, but I am going to attempt to do just that, at ITP and beyond!

Computational media and my first sketch with p5.js

Computational media simply means using computational power to manipulate tangible or intangible media.

On the first day of ICM class last week, we discussed the possibilities of computational media and some moral conundrums in technological progress. This was illustrated using a thought experiment, in which an automated car heading towards an impending collision has to decide between hitting either a group of people or a single person!
As a philosophical problem, it is seemingly unanswerable, as one would get lost in a moral labyrinth the moment one starts to put conditions on who the unfortunate people/person could be. For example, the group of people could be fugitive criminals and the individual person could be a kind philanthropist.
Should you then choose to self-destruct? And what if the passengers are innocent kids?
Anyway, as a designer, I would resist indulging in philosophical puzzles and instead treat this conundrum as a 'use case' for thinking of possible ways to solve such problems. Mega-issues like these, which lie in the civil domain, need intervention on multiple levels, including infrastructure development, city planning, policy making, civil engineering, etc. In a well-designed world, such a dilemma shouldn't arise. Perhaps in the future, all of our road infrastructure will be tiered, with separate highways for different modes of transport, or maybe separate concealed lanes for driverless cars.
Computational power is sufficiently developed for driverless cars to watch traffic and obstacles much further ahead using satellite imagery and to algorithmically calculate their trajectories ahead of time to avoid collisions.

In the future, I wish to be in a position where I'll get to deal with problems of such magnitude, both in terms of scale and complexity. I am fascinated by the physical scale of things for its immersive/emotive impact and the kind of surreal experience it sometimes creates. I particularly recall two new-media projects dealing with scale on a macro and micro level respectively –

Squidsoup is a digital arts group from the UK that often combines sound, space and virtual worlds to create immersive experiences. Pictured above is one of their famous pieces, called Submergence: a monumental 3D grid of LEDs which can visualize animations in three dimensions. Follow the link above to see it in action!


A weird yet brilliant project uses a cheap EEG device and an Arduino to control paramecia (which are actually protozoa, not bacteria).
Whether or not it is sinister is open to debate, but I find it fascinating to see technology being used to manipulate our natural world through sheer will, in real time!


Coming from a product design background, I am also interested in using code to shape 3D form. I love the work of Zaha Hadid's studio, which uses generative design to conceptualize gigantic buildings that are extremely fluid and organic. Nervous System is another of my favorite studios, combining organic generative design with digital fabrication methods.


First p5.js sketch

So with big hopes and a lot of excitement, I took baby steps toward mastering JavaScript by making my first-ever sketch using p5.js. I thoroughly enjoyed the process but was a bit annoyed at having to gauge where the shapes would lie in the coordinate system, mainly because the coordinate points were at the scale of pixels! I am too used to Photoshop for visual design work, so the lack of an intuitive interface for arranging visual elements was a bit frustrating!

Later, though, I discovered a simple workaround to this issue on Carrie's blog: simply draw the shapes in Illustrator first to get the exact coordinates.
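To make the pixel-coordinate bookkeeping concrete, here is a minimal p5.js sketch in that spirit (the shapes and positions are invented for illustration, not my actual sketch):

```javascript
function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(255);
  // p5's origin (0, 0) is the top-left corner and y grows downward,
  // so every shape is positioned by hand in pixel coordinates
  ellipse(200, 150, 80, 80); // center x, center y, width, height
  rect(160, 200, 80, 40);    // top-left x, top-left y, width, height
  line(0, 300, width, 300);  // a horizontal rule across the canvas
}
```

Every position is an (x, y) pair measured in pixels from the top-left corner of the canvas, which is exactly what makes eyeballing a layout harder here than in Photoshop.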