As the fields of applied visualization modeling continue to broaden, in both art-based and science-based endeavors, there is a corresponding need to define and support those endeavors more specifically. PJIM publishes articles that help practitioners find a common language from which to launch innovative visual solutions, whether derived from hard data or from fluid concepts. This yields a wide cross-over of submissions to PJIM: in some respects, our goal is to bring the computational and processing side to the arts and, conversely, the abstract, experimental, and intuitive aspects to the sciences. To open our sixth year of PJIM we have chosen four articles that truly reflect this expanse of information/visualization potential, from the most esoteric (and, we might say, philosophical) to the most pragmatic and applicable.
We look first at ideas of Behavioral Art and how art can be applied as a tool rather than, per se, an aesthetic. From there we move to data-media-devices as potential full adjuncts to human perception, demonstrated through a cross-over performance/experiment with useful implications for practical application: a kind of flying-by-instruments model for the intrepid pedestrian. Our third article considers computational and highly specific visualization modeling; the authors examine visualization at the level of pixel relevancy, with an Abstract Rendering framework in which the relevancy of each pixel is computed from supplied formulas. Our fourth article is derived from DARPA-sponsored research; it considers the value of tightly integrated analysis of the parts and sequences of visualization, from data through users, and suggests how metrics of effectiveness can become increasingly objective when working with big data. It is both a powerful primer on big data exploitation and a lever toward quantifying usability metrics. All of our authors grapple with deriving knowledge from what things look like or, conversely, with the desire to render something that informs us about the essence of what lies behind the aesthetic.
Jihoon Kang, Publisher, and William Bevington, Editor-in-Chief
Parsons Journal for Information Mapping
by Judson Wright, MA
by Paul Taylor, MFA
by Joseph A. Cottam, PhD, Peter Wang
by Jana Schwartz, PhD, Joshua C. Poore, PhD, Emily Vincent, MS, David Reed, MS
Behavioral art, communication, gestalt, meaning-making, metaphor, mythology
Consider Behavioral Art (BA) not as a thing, nor as an objet d’art, but as a function. If we do so, we must make a sharp distinction between, on the one hand, the artifact passively on display, imbued with a priori meaning as a matter of unfounded faith in Platonism, labeled only synthetically by its contextualizing environment (e.g., an art gallery, mathematical equation, or grammar), and, on the other, the active process of creative experience, whether of the artist or the audience member, in which meaning is a cognitive assembly process, as in Constructivism. Though the mind clearly instigates this process, it fundamentally involves the coordination of gross and fine motor skills, well beyond the eye rapidly reorienting the fovea. Furthermore, these art-systems are generally facilitated by the execution of formalized logical syllogisms, which implies code, further implicating computer art. Thus, an important aspect of BA is “borrowing intelligence” from a humanly organized source (e.g., a painting), applying it to computation, and re-rendering it in a humanly detectable modality. In this scheme, BA exploits our species’ unique impulse to attach meanings to stimuli. The embodiment of metaphorical concepts we can finally identify as mapping.
Judson Wright is a computer programmer who studies cognition. His software experiments/artwork have been featured extensively around the world and on the web since 1996. He graduated from Brown University and has an MA from the Interactive Telecommunications Program at New York University.
GPS tracking, performance art, collaboration, mobile applications, extremist interface
Digital Trust Hike is the title of a performance piece conducted in 2012. Walking the length of Taylor Street in San Francisco without looking up from my smartphone, I donned a foam cube that acted as a safety helmet, making me highly visible to others while limiting my field of view to the phone in my hand. I navigated via Google Maps, recorded various data with a GPS tracker, listened to music to drown out sound, and, using Twitter, live-tweeted my progress. I had collaborators stationed at every intersection; when they saw me coming, they called me and talked me safely across the street. While I was immersed in digital experience, I outsourced the act of paying attention to physical reality—vehicles, other pedestrians, buildings—to my social network. Digital Trust Hike was extensively documented. I encouraged my collaborators to shoot photos and video with their phones during the hike, resulting in hundreds of still images and a roughly three-minute edited video with sound. I also have data gathered from a GPS tracking app, including my speed and elevation over time as well as a route line.
Paul Taylor is a multimedia artist based in the San Francisco Bay area. His work explores the effects of increased digital immersion on our perceptions of and interactions with our surroundings and each other, and has been included in group exhibitions and screenings throughout the United States and abroad.
Abstract rendering, alpha composition, color weaving, data transformation, highalpha, homoalpha, overplotting, pixel-level effect, visual mapping
The fundamental premise of visualization is that a useful correspondence between pixels and data can be built. However, visualization programs rarely operate at the pixel level. Instead of the discrete, finite space of pixels, the most common visualization models work with canvases of floating-point coordinates and geometric shapes. Preserving a link to the source data all the way down to the pixel level provides many opportunities for improving visualization. This paper describes Abstract Rendering (AR), a framework that preserves the data-to-pixel link. Using this pixel-level link, AR provides unique control over the final visual representation of data sets at all scales and across a variety of visualization technique families.
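The data-to-pixel link the abstract describes can be pictured as an aggregation pipeline: project data into pixel bins, aggregate per bin, then transfer the aggregates to colors. The sketch below is a simplified illustration of that general idea in Python/NumPy, not the authors' implementation; the function name and the count-then-scale transfer stage are our own assumptions for demonstration.

```python
import numpy as np

def abstract_render(points, width, height):
    """Sketch of an Abstract-Rendering-style pipeline: project each
    data point to a discrete pixel bin, aggregate per bin (here: a
    simple count), then transfer aggregates to pixel intensities."""
    xs, ys = points[:, 0], points[:, 1]
    span_x = np.ptp(xs) if np.ptp(xs) > 0 else 1.0
    span_y = np.ptp(ys) if np.ptp(ys) > 0 else 1.0
    # Project floating-point coordinates onto the pixel grid.
    px = ((xs - xs.min()) / span_x * (width - 1)).astype(int)
    py = ((ys - ys.min()) / span_y * (height - 1)).astype(int)
    # Aggregate: count the data items that land in each pixel.
    counts = np.zeros((height, width), dtype=int)
    np.add.at(counts, (py, px), 1)
    # Transfer: map counts to 0-255 intensities. Because the per-pixel
    # aggregate is retained, this stage can be swapped (e.g., for log
    # scaling to counter overplotting) without re-reading the data.
    return (counts / max(counts.max(), 1) * 255).astype(np.uint8)

img = abstract_render(np.random.rand(10_000, 2), 64, 48)
```

Keeping the per-pixel aggregate separate from the final transfer is what gives a pixel-level handle on the image: the same binned data can be re-colored under different transfer functions without touching the source.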
Joseph A. Cottam is a postdoctoral researcher at Indiana University. His work focuses on systems and programming languages to support data analysis and visualization. Peter Wang is the president and co-founder of Continuum Analytics. He has worked on data analysis and visualization products for the past ten years.
Big data analysis tools, metrics, cognitive load, task performance, applied psychophysiology, engagement, user experience, user-centered
A big data visualization system comprises a diverse set of components. Obvious components include the data set, an analytic acting on the data set, and the visualization of the data set and/or the results of the analytic, with one or more of each of these components in the system. The consumer, or user, of the visualization is a less obvious component, but one that is nonetheless critical to understand when gauging system performance. There are well-defined, quantitative metrics to describe the data set (how big? what kind of data?), the analytic (how quickly does it converge?), the visualization (how over-plotted?), and the IT infrastructure (how much memory?). However, the user operates within an additional set of constraints imposed by her context—the performance of the supporting IT infrastructure, time constraints, distractions, etc. As such, understanding the user-in-context has typically relied on subjective, qualitative approaches (did you enjoy the presentation?) rather than the objective, quantitative ones applied to the other parts of the system. The result is an unbalanced holistic assessment, with some components measured quantitatively (and often in great detail) while others are measured only qualitatively, leaving unforeseen risks to overall system performance. Draper Laboratory is developing quantitative, non-subjective measures to describe system-level performance within and between composed big data systems. These measures leverage classical psychometrics, but extend them to include dynamic behaviors and physiologic responses. We have previously developed a measure of Engagement, and are currently working to apply that measure, and related ones, to assess the performance of big data analysis systems.
Dr. Jana Schwartz is a Principal Member of the Technical Staff at Draper Laboratory and Group Leader for Human-Centered Engineering. She has interests in human/system collaborative operations, complexity theory, and the use of real-time psychophysiological response for closed-loop performance, and is the PI of Draper’s XDATA team. Dr. Joshua Poore is a Senior Member of the Draper Laboratory Technical Staff and an experimental psychologist. He has applied his work to developing comprehensive metrics for user states (e.g., immersion, proficiency) within human-computer interfaces, including virtual environments, simulations, gaming, and analytic tools. Ms. Emily Vincent is a Senior Member of the Technical Staff in Human-Centered Engineering at Draper Laboratory. She has led human-system collaboration and software tasks for intelligence community and SOCOM programs, including the development of the Next Generation BAO Kit roadmap and TacDroid™ prototype. Mr. David Reed is a Member of the Technical Staff at Draper Laboratory. He has interests in data visualization, human/system collaborative systems, signal processing, machine learning, and data science.