Fashion Research Institute Oversees a Third Round of the Science Sim Land Grant Program with Intel Labs

New York, NY August 8, 2011 – Fashion Research Institute Oversees a Third Round of the Science Sim Land Grant Program with Intel Labs

Fashion Research Institute is pleased to announce the third round of OpenSim region grants in the ScienceSim grid. We will administer the land program through our research collaboration with Intel Labs.

We’ve been provided with a set of regions running on hardware that can support 45,000-100,000 primitive objects and up to 1,000 concurrent users per region. The regions will be awarded to educators, scientists, and researchers who wish to bring their programs into an immersive, collaborative environment, for a grant period beginning September 1, 2011 and running to June 30, 2012.

There are no hidden charges or costs in this program beyond whatever a selected organization may need to spend on transferring and developing its programs, as negotiated with its own service providers. No financial assistance is available for this process. We can accept existing OAR files and transfer them into ScienceSim.

Commercial organizations and consultants are not eligible to apply for these regions. Recipients must sign a formal legal agreement with Fashion Research Institute for use of these OpenSim regions. This agreement includes clauses stating that the recipient organization will respect the existing Terms of Service, End User License Agreement, Region Covenant, and Content Licenses of the ScienceSim grid.

The Fine Print

Each accepted organization will receive a four-region, 2×2 ‘campus’ from September 1, 2011 through June 30, 2012. Organizations must appoint a single user, who will receive estate manager privileges on this campus.

Campus assignees have full land rights. Regions must remain open to common access so that visitors can move around and visit freely.

Assigned campuses must be built on within three weeks of assignment.  Land which is not improved within four weeks of assignment will be reclaimed, and any objects placed in the region will be returned to the land assignee.

A library of premium content is provided to all participants on ScienceSim, and additional content is provided as well. This content may not be removed from ScienceSim. Content suspected of being pirated that is brought into ScienceSim will be removed immediately. All content provided for ScienceSim users is PG-rated.

A complete OpenSim orientation gateway, which has been successfully used with more than 65,000 new users, is provided for the use of land grant recipients and their program users. A scripting lab is provided for recipients to learn how to develop OpenSim scripts. Additionally, there are meeting, classroom, and sandbox spaces provided throughout the common space of the grid in the physics and math plazas, which land grant recipients may freely use.

Expected Code of Behavior:

ScienceSim serves a population of educators, researchers and scientists.  Land grant recipients are expected to register with their real names and to manage their programs appropriately.

All users are expected to behave with decorum and respect for others to support this collaborative, interdisciplinary working environment. Services are provided in English only. All users who enter and use this grid are expected to behave and dress in a manner appropriate to a corporate or academic setting. All users are expected to respect others’ beliefs; no solicitation, proselytization, foul language, or harassment of any sort is allowed here. Clothing is mandatory: at a minimum, a shirt and trousers that meet typical community decency standards.

Land grants are provided with an expectation that users will have sufficient expertise to develop their own regions.  There are weekly user meetings at which user experiences can and should be reported, as well as a mailing list where feedback is encouraged.  Lastly, there is a weekly governance meeting at which any conflicts will be arbitrated.

Participation

To participate in this land grant program, please send e-mail to admin@fashionresearchinstitute.com with your name, your organization, and a 2-3 sentence description of the project you’d like to explore in this collaborative environment. The program has rolling admissions, and we will accept applications until we have assigned all campuses.

Past Awardees

Previous awardees are eligible to apply for this program.  Previous recipients have included the Abyss Observatory, the IDIA Lab, ScienceCircle, Meta-Institute for Computational Astrophysics, and Utah State University.

###

About Fashion Research Institute, Inc.: FRI is at the forefront of developing innovative design & merchandising solutions for the apparel industry.  They research and develop products and systems for the fashion industry that sweepingly address wasteful business and production practices.

ScienceSim is part of an evolution toward online 3D experiences that look, act and feel real. Sometimes dubbed the “3D internet,” this technology trend is referred to by Intel Labs as immersive connected experiences, or ICE. ScienceSim is differentiated from most virtual world environments by its open source architecture. ScienceSim leverages open source building blocks (installation utilities, management tools, client viewers, etc.) based on OpenSimulator (OpenSim) software.


Fashion, Tech, Innovation: Using Avatars to Design Video Garment Imagery

Armed with our initial vision of a base garment that could essentially play videos or images on its surface, we’ve looked at some of the challenges that need to be addressed before this could become reality.

Last time we looked at how a video playback garment might actually work. Now let’s wrap this up by talking a little about how designers would go about actually designing the images and video that would play on the garment’s surface.

As we mentioned before, the human body is a solid 3-D object that we are trying to wrap a planar (flat) sheet around. This is no different from our fashion classes, where we are given a few yards of muslin (a flat, almost 2-D textile sheet) and told to drape a 3-D mannequin. In moving from designing physical fashion to designing flat images to play on the video garment, we are doing much the same thing, except we are doing all of our draping with the image, not with the cloth. This requires a slight change in how we go about draping, since what we will actually be draping onto is the base video garment, and what we will be draping with are 2-D images.

And this is where the avatar comes in, since the process of draping a digital image onto a solid body requires a mannequin, in this case, an avatar.  At its simplest level, an avatar is nothing more than a digital representation of a human body.  We already know how to go about putting clothing onto human bodies, or at least we should have learned that at design school.

Taking our knowledge about draping onto the human body a step further, we simply substitute our expertise with Adobe’s Photoshop and Illustrator for pins, needles, and scissors, and drape the avatar not with textile, but with imagery.

Of course, like any new skill, it takes time and experience to get video garment images right, but a really nice aspect of designing for video garments is that the designer can create as many styles as she wishes, and she can ‘show off’ her design concepts using something like Black Dress Technology’s Virtual Runway™ service. Unlike draping with textiles, draping pixels on an avatar mannequin does not require the production of costly physical samples. You just design a style, upload it, watch it move on Virtual Runway, and then, once the concept is approved, load the design onto the base garment.

Once the design is approved, it can be made available for licensing on any of a number of web sites or even via mobile apps! Think about it – you can really share your fashion sense with your besties simply by sending them a link.  Some designers may decide to open source their ‘basic’ video garment images and encourage their followers to customize their own designs.

Of course, it will be an interesting question whether or not the maker of such a video garment will try to use a proprietary file format instead of standard ones like jpg or png files.  Also, will the video garment be an open format, or closed format like the Kindle e-book reader? Amazon would no doubt love to get in the fashion game (everyone seems to want to be there, these days), and it would be entirely possible for them to come up with some version of a proprietary video garment, where they could sell the garment imagery just like they do e-books.

We would anticipate that the early video garments wouldn’t have the data or battery capacity to actually play video, but as the base technology improved and progressed, it would not be out of the question at all to eventually truly have video garments that play moving images over the surface. Imagine the possibilities: a formal gown that plays back images of moving sunlight and shadow dapple over a forest floor, or waves crashing eternally downward to froth and foam (virtually) at the wearer’s feet.   Think of the fun accessories designers could have developing product to complement such designs! Perhaps small scent pomanders contained in earrings or brooches, or tiny sound transistors with short loops of water waves or bird sound for a completely immersive experience, allowing the wearer to carry their own little environment with them.

The possibilities are endless.  All we need is for the materials sciences folks and the technology folks to catch up and give us the technology to do this.  Then we fashionable folk can take it from there.

Something Completely Visionary: Fashion, Tech, Innovation: UVW & XYZ

Armed with our initial vision of a base garment that could essentially play videos or images on its surface, let’s explore some of the challenges that need to be addressed before this could become reality.

Last time we looked at possible power sources for such a garment, including battery textiles and other possible sources of power. This time, let’s look at how a video playback garment might actually work.

The human body is a 3-dimensional object; we occupy a certain volume of space. The space we occupy is defined by Cartesian coordinates, X, Y, and Z. Cartesian coordinates begin at a ‘center point’, the precise placement of which is usually predetermined as a standard. For most body scanners, the X, Y, and Z axes are oriented so the scanned figure stands on the XY plane (the floor) and the Z axis extends vertically from the feet to the top of the head, so that X is the width of the body from side to side, Y is the depth from front to back, and Z is the height from the ground to the top of the head.
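To make that convention concrete, here is a minimal Python sketch (the sample points and all names are our own illustrative assumptions, not any particular scanner’s data format) showing how a scanned figure’s width, depth, and height fall straight out of those three axes:

    import numpy as np

    # Hypothetical body-scan point cloud: one row per point, columns are X, Y, Z,
    # following the convention above (X = side to side, Y = front to back, Z = floor to head).
    scan_points = np.array([
        [-0.25, -0.12, 0.02],   # near the left foot
        [ 0.25,  0.12, 0.02],   # near the right foot
        [ 0.00,  0.10, 1.72],   # top of the head
        # ...a real scan would contain many thousands of points
    ])

    # Width, depth, and height are just the extent of the scan along X, Y, and Z.
    width, depth, height = scan_points.max(axis=0) - scan_points.min(axis=0)
    print(f"width {width:.2f} m, depth {depth:.2f} m, height {height:.2f} m")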

This is the sort of stuff that can make your brain explode, but it’s also important, because in developing a video garment, the designer will need to be able to create a flat, 2-dimensional image (texture) which can be mapped to the X, Y, Z coordinates of the human body.

That flat, 2-dimensional image is also called a U, V, W map, where U maps to X coordinates, V maps to Y coordinates, and W maps to Z coordinates. A designer needs to understand the ‘high points’ of the human body (e.g., the point of bust, shoulder, hip, and so on) so that as she develops a flat image to play on the video surface, she can adjust the image to make sure it wraps itself onto the video garment correctly, which will then, we hope, wrap itself around the human body in a way that is both attractive and, yes, flattering.
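As a rough illustration of what it means to map a flat image onto body coordinates, here is a minimal Python sketch. It uses the simplest possible flattening, a straight front-view projection in which the image’s horizontal axis follows the body’s X (width) and its vertical axis follows Z (height); an unwrap for a real video garment would also account for Y (depth) and the high points discussed above. All function and variable names here are hypothetical.

    import numpy as np

    def front_projection_uv(points):
        """Flatten 3-D body points (X, Y, Z) into 2-D image coordinates (u, v).

        u follows the body's width (X) and v its height (Z), both normalised
        to the 0..1 range, so a flat rectangular design image can be pinned to
        the front of the body. This is only the simplest illustrative case; a
        production unwrap would also use depth (Y) and treat the bust,
        shoulder, and hip high points specially.
        """
        x, z = points[:, 0], points[:, 2]
        u = (x - x.min()) / (x.max() - x.min())
        v = (z - z.min()) / (z.max() - z.min())
        return np.column_stack([u, v])

    # Example: find which pixel of a designer's flat image lands on each body point.
    design_image = np.zeros((1024, 1024, 3))   # stand-in for the designer's artwork
    body_points = np.random.rand(5000, 3)      # stand-in for a real body scan
    uv = front_projection_uv(body_points)
    rows = ((1.0 - uv[:, 1]) * (design_image.shape[0] - 1)).astype(int)   # v -> image row
    cols = (uv[:, 0] * (design_image.shape[1] - 1)).astype(int)           # u -> image column
    colours_on_body = design_image[rows, cols]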

And this is where the fun of it all comes in, because at this point the designer can begin to really play with her art. Years of couture experience have taught us how to fool the eye with seam and trim placement; a good couturiere can make her client look 20 pounds lighter, and certainly feel like a princess. Imagine, then, a couture designer who can simply and easily create the digital images that play on video garments, giving wearers access to the skills of the couturiere and allowing ‘off the rack’ digital designs to be easily adapted to their unique bodies.

Next time, we delve further into the importance of the avatar in developing for a video garment.

White Paper Available: Leveraging the Power of Virtual Worlds for Collaboration

New York, NY March 24, 2011 – Fashion Research Institute Publishes Latest Thought Piece: Leveraging the Power of Virtual Worlds for Collaboration by CEO Shenlei Winkler.

Fashion Research Institute CEO Shenlei Winkler announces the publication of FRI’s latest white paper, Leveraging the Power of Virtual Worlds for Collaboration.

Based on a presentation initially made in January 2008 to IBM Research North America, this whitepaper incorporates case studies drawn from FRI’s well-publicized collaborations in business, education and fashion, and focuses on some additional use cases.

Leveraging the Power of Virtual Worlds for Collaboration may be downloaded from the Fashion Research Institute web site.

About Fashion Research Institute, Inc.: The Fashion Research Institute is at the forefront of developing innovative design & merchandising solutions for the apparel industry. They research and develop products and systems for the fashion industry that sweepingly address wasteful business and production practices. Shenlei Winkler’s work spans both couture and mass-market design and development for the real-life apparel industry. A successful designer, she has achieved lifetime sales of more than $70 million USD for her real-life apparel designs, with more than 25 million-dollar styles in her portfolio. Her couture work has appeared extensively on stage and movie screen.

Mission Critical: PG Avatars for Corporate and Educational Use

There is a growing call from consumers of digital content and virtual goods for avatars that meet a PG rating. While many deeply immersed users of avatars would object strenuously to having their avatars de-sexualized, the audience for a less sexualized avatar not only exists but is vocal in its desire for an avatar that will allow its projects to proceed without emphasis on some of the more mature aspects of interpersonal interaction.

Such audiences include both enterprise and educational users, many of whom have specialized audiences that either need to be protected from exposure to mature avatars or regard the avatar as a tool whose effectiveness will be hampered if it is too ‘hot’.

Users whose audience is underage will usually insist on a PG-rated avatar for a variety of reasons, not least of which is to reduce their legal exposure in the event that a user attempts to engage in inappropriate behavior within their project. Likewise, corporate audiences may simply wish to have reasonably attractive, high-quality avatars that are appropriate for everyone in the organization to use, from the entry-level worker to the CEO.

Fashion Research Institute has been asked to develop such avatars for various organizations. Our work in developing these specialized avatars has shown that creating a premium PG-rated avatar appropriate for these clients is not as easy as simply welding a bathing suit onto the avatar skin.  There are additional considerations that must be taken into account, including the age, culture, and gender of the intended user base.

For example, in developing for Preferred Family Health Care, one of the requirements was that any clothing provided could not have bare midriffs or plunging cleavage. Their user base is under the age of 18. As any parent knows, this demographic can often be found in malls wearing plunging necklines and low-rider jeans; this is the fashion preferred by this age range. However, the requirement was not that we provide what those users would want, but rather what the administrators of the program in which the avatars would be used wanted.

Likewise, when we developed both the Content Library and the shopping mall in ScienceSim, we focused on providing quality clothing and avatar customization without plunging necklines, wife-beater tanks, low-cut waists, excessively long hair, or overly made-up skins. The user base uses the OpenSim-based ScienceSim as a work tool to advance their research in data visualization and other areas. Inappropriately sexualized avatars would distract from the real work and would be inappropriate.

In thinking about the PG avatar, users may opt to have the avatar developed with PG skins, usually with some sort of modesty garment added (often a bathing suit, for all of the obvious reasons). Less commonly, a client may ask to have skins with the ‘wobbly bits’ removed, but with no modesty garment. Clothing is generally modest, with knee- to tea-length skirts for the women and trousers and jeans for the men. Tops are opaque for both genders, and where graphic tees are provided, care is taken to use innocuous graphics. Jewelry and other accessories tend to be discreet: no Mr. T bling, and above all, no trademarked goods unless a formal license has been obtained and permission granted to use the trademark in question.

A well-developed PG avatar will enable organizations to conduct their real business in virtual worlds without worrying about inappropriate visuals marring their programs. A PG avatar may even be regarded as a mission-critical component for corporate and educational virtual world projects, especially those with mixed age demographics or those with underage users.

As the number of entities entering virtual worlds to use them as formal work tools increases, so too will the need for premium PG avatars, and for the development of best practices and standards that define both quality and rating. Fashion Research Institute has begun the process of developing such standards for its own content, which is developed following its existing product design and development methodology.

Fashion Through a New Lens: Avatars and Apparel

Those of us who work day-to-day in apparel often forget what ‘fashion’ is like for people not in the industry. We forget, if we ever knew, that industry outsiders may not understand that there are always very real drivers behind how fashion happens. We look back as much as we look forward, and we analyze fashion trends and fashion disasters. A critical difference, however, is that fashion designers speak a language whose vocabulary is composed of color, shape, style, and form. Our stylistic judgments are made, and we begin talking about what we think will be ‘important’. Others in the industry ‘get it’. We don’t have to say much more than we ‘believe in it’ and we ‘think it is important’, and discussions then become very tactical about getting the idea developed into a product.

Using that language with outsiders is akin to American tourists traveling in a non-English-speaking country. We speak louder in the hope that our audience will understand what we’re trying to say. Sometimes, through a common hook, we’re able to communicate. But usually the experience is a handful of apparel industry personnel discussing whatever new concept excites them while the industry outsider tries to keep up by tossing in bits of wisdom gleaned from Style.com or one of the other fashion web sites.

Attending the Japan Fashion Now exhibit at the FIT Museum held additional interest for us beyond being exposed to the latest in fashion development out of Japan. We were joined on this expedition by one of our colleagues at IBM, Aimee Sousa, who likes aspects of fashion (in particular boots) but isn’t steeped in Fashion. It was very interesting to us to watch Aimee’s first experience at an event that was very much focused on apparel industry practitioners.

With the exception of the guests, the invitees were industry personnel and FIT alumni. The presenter was Valerie Steele, a world-renowned fashion historian and thought leader in her space. The language was our language, and Ms. Steele was presenting to us in our mutually understood tongue. It would not have been surprising if the conversation that evening had been inaccessible to an industry outsider, and if she had been less than captivated by the experience.

Instead, we had an opportunity to watch as the magic, romance, and passion of our industry, our product, our drive, was distilled and communicated in such a way as to captivate our colleague. As we rewound the exhibit later that evening at a cocktail function, it was deeply satisfying and interesting to learn how, after years of buying off-the-rack, our colleague suddenly ‘got’ that fashion has reasons for design and that we designers do not create in a void. Rather, we are looking backwards at the past, while predicting the future, and living in the present. Some of our best resources are still museums and old fashion periodicals, and our best guides are fashion historians and other designers, but at the same time we have also learned to use the many new digital resources available to us.

Watching Aimee’s induction into our language was a curious experience. She is still not immersed in the flow of our world, but she understands better now why we say brown is important, or we believe in cheetah (or denim, or silver, or whatever).

The experience also brought home to me again how tribal our fashion choices are, and how we choose to adorn our bodies is critical to reflecting our beliefs, our alignments, even in some cases our emotional state. A critical question asked of me prior to the event was ‘what shall I wear?’ Naturally, the answer was ‘black’. But that answer set off a whole additional round of questioning: should I wear a dress, what about shoes, what sort of accessories? My guest wanted desperately to align with our mores, not to appear as an outsider at this very insider event. She chose to do this through the clothing she selected to wear to the event, just as she chooses to align her avatar in virtual worlds with the different communities she belongs to.

There has been a rise of interest lately among corporations and educational organizations in providing attractive avatars for their virtual world projects. This is not really a surprise to us at the Fashion Research Institute. We have, after all, been researching the process of immersion and how people adapt their digital avatar representation to new ‘tribes’ or communities in digital spaces. Moreover, as we were reminded recently at our fashion event at FIT in New York City, people’s desire to align with communities is a transcendent force.

Just as in the physical world my colleague was flustered until we sorted out the oh-so-important question of ‘proper dress’, so too in virtual worlds are people unable to focus on actual work and deeply immerse until they create a visual representation of themselves that they regard as acceptable. Admittance to a group, whether in the physical world or the digital realm, is as close as adorning your avatar with the right clothing and accessories.

Acceptance, of course, requires rather more time for other community members to learn about who the person is.

But that initial tentative acceptance is lubricated by the strong visual cues created by the choices an avatar owner makes in dressing and customizing their avatar.  We saw this over and over again when we operated our official Linden Lab® Community Gateway region in Second Life®. After orienting and observing more than 65,000 new users of Second Life, we have good data on how to get new users quickly oriented to these new tools, and  how they learn to immerse.

Needless to say, we were delighted to be joined by our colleague at the Japan Fashion Now exhibit. Not only was the fashion fashionable and the company wonderful, but we were also pleased to have a learning moment in our own area of research.

Fashion Research Institute Supplies Avatars for ScienceSim Demo

On August 31, 2010, Intel Labs posted this video discussing the advances in scalability in the OpenSim platform of ScienceSim. Fashion Research Institute had the pleasure of collaborating by providing the avatars and apparel shown in the video.

“John” is the basic default corporate male avatar provided by Fashion Research Institute to ScienceSim. John’s feminine counterpart, Jane, isn’t shown in this video. Just as in the physical world, it seems the price of beauty presents interesting challenges: Jane’s hair, jewelry, and other accessories have a much higher avatar rendering cost than John’s much simpler attire.

Jane will make her appearance at some point, however, along with the four other new default avatars being provided by Fashion Research Institute to ScienceSim as part of a corporate donation of a new content library to ScienceSim.

Creating and Visualizing 3D Content in Science Sim