Dr. Philipp Fechteler

Research Associate

Contact

Mailing address:
Fraunhofer HHI
Einsteinufer 37
10587 Berlin
Germany

Visitor address:
Room 04-17
Otto-Dibelius-Straße/Salzufer 6
10587 Berlin

Phone: +49-(0)30-31002-616
FAX: +49-(0)30-31002-190
E-mail:

Publications

journal A. Hilsmann, P. Fechteler, W. Morgenstern, W. Paier, I. Feldmann, O. Schreer, P. Eisert @ IET Journal on Computer Vision
Going beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances
[Video 720p/55MB]   [Video 1080p/321MB]   [WWW]  


Abstract: In this paper, we present an end-to-end pipeline for the creation of high-quality animatable volumetric video content of human performances. Going beyond the application of free-viewpoint volumetric video, we allow re-animation and alteration of an actor’s performance through (i) the enrichment of the captured data with semantics and animation properties and (ii) applying hybrid geometry- and video-based animation methods that allow a direct animation of the high-quality data itself instead of creating an animatable model that resembles the captured data. Semantic enrichment and geometric animation ability are achieved by establishing temporal consistency in the 3D data, followed by an automatic rigging of each frame using a parametric shape-adaptive full human body model. Our hybrid geometry- and video-based animation approaches combine the flexibility of classical CG animation with the realism of real captured data. For pose editing, we exploit the captured data as much as possible and kinematically deform the captured frames to fit a desired pose. Further, we treat the face differently from the body in a hybrid geometry- and video-based animation approach where coarse movements and poses are modeled in the geometry only, while very fine and subtle details in the face, often lacking in purely geometric methods, are captured in video-based textures. These are processed to be interactively combined to form new facial expressions. On top of that, we learn the appearance of regions that are challenging to synthesize, such as the teeth or the eyes, and fill in missing regions realistically in an autoencoder-based approach. This paper covers the full pipeline, from capturing and producing high-quality video content, through its enrichment with semantics and deformation properties for re-animation, to the processing of the data for the final hybrid animation.
conference A. Hilsmann, P. Fechteler, W. Morgenstern, S. Gül, D. Podborski, C. Hellge, T. Schierl, P. Eisert @ KuI 2020
Interactive Volumetric Video Rendering and Streaming
thesis Philipp Fechteler
Multi-View Motion Capture based on Model Adaptation, Dissertation, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, November 2019.
[Bibtex]   [WWW]  


Abstract: Photorealistic modeling of humans in computer graphics is of special interest because it is required for modern movie and computer game productions. Modeling realistic human models is relatively simple with current modeling software, but modeling an existing real person in detail is still a very cumbersome task. This dissertation focuses on realistic and automatic modeling as well as tracking of human body motion. A skinning-based approach is chosen to support efficient realistic animation. For increased realism, an artifact-free skinning function is enhanced to support blending the influence of multiple kinematic joints. As a result, natural appearance is supported for a wide range of complex motions. To set up a subject-specific model, an automatic and data-driven optimization framework is introduced. Registered, watertight example meshes of different poses are used as input. Using an efficient loop, all components of the animatable model are optimized to closely resemble the training data: vertices, kinematic joints and skinning weights. For the purpose of tracking sequences of noisy, partial 3D observations, a markerless motion capture method with simultaneous detailed model adaptation is proposed. The non-parametric formulation supports free-form deformation of the model’s shape as well as unconstrained adaptation of the kinematic joints, thereby allowing the individual peculiarities of the captured subject to be extracted. Integrating a priori knowledge of human shape and pose, extracted from training data, ensures that the adapted models maintain a natural and realistic appearance. The result is an animatable model adapted to the captured subject as well as a sequence of animation parameters, faithfully resembling the input data. Altogether, the presented approaches provide realistic and automatic modeling of human characters accurately resembling sequences of 3D input data.
journal P. Fechteler, A. Hilsmann, P. Eisert @ CGF
Markerless Multiview Motion Capture with 3D Shape Model Adaptation, Computer Graphics Forum 38(6), pp. 91-109, March 2019.
[Bibtex]   [WWW]   [Video 1080p/82MB]   [Video 720p/8MB]  


Abstract: In this paper, we address simultaneous markerless motion and shape capture from 3D input meshes of partial views onto a moving subject. We exploit a computer graphics model based on kinematic skinning as template tracking model. This template model consists of vertices, joints and skinning weights learned a priori from registered full‐body scans, representing true human shape and kinematics‐based shape deformations. Two data‐driven priors are used together with a set of constraints and cues for setting up sufficient correspondences. A Gaussian mixture model‐based pose prior of successive joint configurations is learned to soft‐constrain the attainable pose space to plausible human poses. To make the shape adaptation robust to outliers and non‐visible surface regions and to guide the shape adaptation towards realistically appearing human shapes, we use a mesh‐Laplacian‐based shape prior. Both priors are learned/extracted from the training set of the template model learning phase. The output is a model adapted to the captured subject with respect to shape and kinematic skeleton as well as the animation parameters to resemble the observed movements. With example applications, we demonstrate the benefit of such footage. Experimental evaluations on publicly available datasets show the achieved natural appearance and accuracy.
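The Gaussian mixture pose prior mentioned in this abstract can be illustrated with a minimal sketch (the function and parameter names below are illustrative, not taken from the paper; there, the mixture is learned from successive joint configurations of the training data):

```python
import numpy as np

def gmm_log_prior(pose, means, covs, weights):
    """Log-likelihood of a joint-angle vector under a Gaussian mixture pose prior.

    Maximizing this term softly constrains the optimizer to plausible human poses.
    """
    d = len(pose)
    log_ps = []
    for mean, cov, w in zip(means, covs, weights):
        diff = pose - mean
        log_ps.append(np.log(w)
                      - 0.5 * (diff @ np.linalg.solve(cov, diff)
                               + np.log(np.linalg.det(cov))
                               + d * np.log(2.0 * np.pi)))
    # log-sum-exp over the mixture components for numerical stability
    return np.logaddexp.reduce(log_ps)
```

During tracking, the negative of this term can be added to the data term of the objective, so that joint configurations far from all mixture components are penalized.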
conference P. Fechteler, L. Kausch, A. Hilsmann, P. Eisert @ ICIP 2018
Animatable 3D Model Generation from 2D Monocular Visual Data, Proceedings of the 25th IEEE International Conference on Image Processing, Athens, Greece, 7th - 10th October 2018.
[Bibtex]   [PDF]   [PPTX]  


Abstract: In this paper, we present an approach for creating animatable 3D models from temporal monocular image acquisitions of non-rigid objects. During deformation, the object of interest is captured with only a single camera under full perspective projection. The aim of the presented framework is to obtain a shape deformation model in terms of joints and skinning weights that can finally be used for animating the model vertices. First, the monocular rigid shape estimation problem is solved by computing a template model of the object in rest pose from an image sequence. Next, the unknown external camera parameters and the deformation for each vertex are estimated alternately in a sequential approach. The resulting consistent non-rigid shape geometries are used to compute a kinematic skeleton control structure including skinning weights and optimized shape. For that, a completely data-driven optimization scheme is used, which iterates over three steps: (a) optimization of pose for each frame as well as joint parameters consistent over the entire sequence, (b) optimization of rest pose vertices to enhance the shape and (c) optimization of skinning weights for improved deformation characteristics. With experimental results on publicly available synthetic as well as real-world datasets, we demonstrate the quality of the proposed approach. The resulting models with fixed topology and rigged with skeleton and skinning weights can be animated in existing render engines.
conference P. Fechteler, W. Paier, A. Hilsmann, P. Eisert @ ICIP 2016
Real-time Avatar Animation with Dynamic Face Texturing, Proceedings of the 23rd IEEE International Conference on Image Processing, Phoenix, Arizona, USA, 25th - 28th September 2016.
[Bibtex]   [PDF]   [PPTX]  


Abstract: In this paper, we present a system to capture and animate a highly realistic avatar model of a user in real-time. The animated human model consists of a rigged 3D mesh and a texture map. The system is based on KinectV2 input, which captures the skeleton of the current pose of the subject in order to animate the human shape model. An additional high-resolution RGB camera is used to capture the face for updating the texture map on each frame. With this combination of image-based rendering and computer graphics, we achieve photo-realistic animations in real-time. Additionally, this approach is well suited for networked scenarios, because only a small amount of data is needed per frame to animate the model, consisting of motion capture parameters and a video frame. With experimental results, we demonstrate the high degree of realism of the presented approach.
conference P. Fechteler, A. Hilsmann, P. Eisert @ EUROGRAPHICS 2016
Example-based Body Model Optimization and Skinning, Proceedings of the 37th Annual Conference of the European Association for Computer Graphics, short paper, Lisbon, Portugal, 9th - 13th May 2016.
[Bibtex]   [PDF]   [PPTX]   [Video A]   [Video B]   [Video C]  


Abstract: In this paper, we present an example-based framework for the generation of a realistic kinematic 3D human body model that optimizes shape, pose and skinning parameters. For enhanced realism, the skinning is realized as a combination of Linear Blend Skinning (LBS) and Dual quaternion Linear Blending (DLB), which compensates for the deficiencies of using only one of these approaches (e.g. candy-wrapper and bulging artifacts) and supports interpolation of more than two joint transformations. The optimization framework enforces two objectives: resembling both shape and pose as closely as possible by iteratively minimizing the objective function with respect to (a) the vertices, (b) the skinning weights and (c) the joint parameters. Smoothness is ensured by using a weighted Laplacian besides a typical data term in the objective function, which introduces the only parameter to be specified. With experimental results on publicly available datasets, we demonstrate the effectiveness of the resulting shape model, exhibiting convincing naturalism. By using examples for the optimization of all parameters, our framework is easy to use and does not require sophisticated parameter tuning or user intervention.
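To illustrate the two skinning schemes that are combined here, the following sketch (an illustrative minimal formulation, not the paper's implementation) contrasts Linear Blend Skinning with Dual quaternion Linear Blending on a single vertex; the 180° twist example reproduces the candy-wrapper collapse of LBS that DLB avoids:

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_rotate(q, v):
    # rotate 3-vector v by unit quaternion q
    return quat_mul(quat_mul(q, np.array([0.0, *v])), q * [1, -1, -1, -1])[1:]

def lbs(v, weights, rots, trans):
    # Linear Blend Skinning: blend the already-transformed positions
    return sum(w * (quat_rotate(q, v) + t)
               for w, q, t in zip(weights, rots, trans))

def dlb(v, weights, rots, trans):
    # Dual quaternion Linear Blending: blend unit dual quaternions, then transform
    qr, qd = np.zeros(4), np.zeros(4)
    for w, q, t in zip(weights, rots, trans):
        if np.dot(q, rots[0]) < 0:          # keep all quaternions in one hemisphere
            q = -q
        qr += w * q
        qd += w * 0.5 * quat_mul(np.array([0.0, *t]), q)
    n = np.linalg.norm(qr)
    qr, qd = qr / n, qd / n
    t = 2.0 * quat_mul(qd, qr * [1, -1, -1, -1])[1:]  # translation from the dual part
    return quat_rotate(qr, v) + t

# A vertex influenced half/half by a fixed joint and a joint twisted 180° about x:
# LBS collapses it onto the bone axis, DLB rotates it by the 90° halfway twist.
identity = np.array([1.0, 0.0, 0.0, 0.0])
twist180 = np.array([0.0, 1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
zero = np.zeros(3)
v_lbs = lbs(v, [0.5, 0.5], [identity, twist180], [zero, zero])  # -> [0, 0, 0]
v_dlb = dlb(v, [0.5, 0.5], [identity, twist180], [zero, zero])  # -> [0, 0, 1]
```

Blending the two schemes, as the paper proposes, then amounts to a per-vertex weighted mix of `v_lbs` and `v_dlb`.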
conference P. Fechteler, W. Paier, P. Eisert @ ICIP 2014
Articulated 3D Model Tracking with on-the-fly Texturing, Proceedings of the 21st IEEE International Conference on Image Processing, Paris, France, 27th - 30th October 2014.
[Bibtex]   [PDF]   [PPTX]  


Abstract: In this paper, we present a framework for capturing and tracking humans based on RGBD input data. The two contributions of our approach are: (a) a method for robustly and accurately fitting an articulated computer graphics model to captured depth-images and (b) on-the-fly texturing of the geometry based on the sensed RGB data. Such a representation is especially useful in the context of 3D telepresence applications since model-parameter and texture updates require only low bandwidth. Additionally, this rigged model can be controlled through interpretable parameters and allows automatic generation of naturally appearing animations. Our experimental results demonstrate the high quality of this model-based rendering.
workshop D.A. Mauro et al @ HOT3D 2013
Advancements and Challenges towards a Collaborative Framework for 3D Tele-Immersive Social Networking, Proceedings of the 4th IEEE International Workshop on Hot Topics in 3D, San Jose, California, USA, 15th July 2013.
[Bibtex]   [PDF]  


Abstract: Social experiences realized through teleconferencing systems are still quite different from face-to-face meetings. The awareness that we are online, and in a to some extent lesser real world, prevents us from really engaging in and enjoying the event. Several reasons account for these differences and have been identified. We think it is now time to bridge these gaps and propose inspiring and innovative solutions in order to provide realistic, believable and engaging online experiences. We present a distributed and scalable framework named REVERIE that faces these challenges and provides a mix of these solutions. Applications built on top of the framework will be able to provide interactive, truly immersive, photo-realistic experiences to a multitude of users, which for them will feel much more similar to having face-to-face meetings than the experience offered by conventional teleconferencing systems.
conference P. Fechteler et al @ Mirage 2013
A Framework for Realistic 3D Tele-Immersion, Proceedings of the 6th International Conference on Computer Vision / Computer Graphics Collaboration Techniques and Applications, Berlin, Germany, 6th - 7th June 2013.
[Bibtex]   [PDF]  


Abstract: Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE that provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, which for them will feel much more similar to having face-to-face meetings than the experience offered by conventional teleconferencing systems.
journal A. Hilsmann, P. Fechteler, P. Eisert @ Eurographics 2013
Pose Space Image Based Rendering, Computer Graphics Forum 32(2), pp. 265-274, Proceedings of the 34th Annual Conference of the European Association for Computer Graphics, Girona, Spain, 6th - 10th May 2013.
[Bibtex]   [PDF]   [WWW]  


Abstract: This paper introduces a new image-based rendering approach for articulated objects with complex pose-dependent appearance, such as clothes. Our approach combines body-pose-dependent appearance and geometry to synthesize images of new poses from a database of examples. A geometric model allows animation and view interpolation, while small details as well as complex shading and reflection properties are modeled by pose-dependent appearance examples in a database. Correspondences between the images are represented as mesh-based warps, both in the spatial and intensity domain. For rendering, these warps are interpolated in pose space, i.e. the space of body poses, using scattered data interpolation methods. Warp estimation as well as geometry reconstruction are performed in an offline procedure, thus shifting computational complexity to an a priori training phase.
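The scattered data interpolation in pose space can be sketched, for example, with Gaussian radial basis functions (an illustrative choice; the function names and the kernel below are assumptions, not necessarily the paper's exact scheme):

```python
import numpy as np

def rbf_fit(example_poses, example_warps, eps=1.0):
    """Fit Gaussian RBF weights so that the warps are reproduced exactly at the examples.

    example_poses: (n, d) pose-space coordinates of the database examples
    example_warps: (n, m) flattened warp parameters of each example
    """
    dist = np.linalg.norm(example_poses[:, None, :] - example_poses[None, :, :], axis=-1)
    phi = np.exp(-(eps * dist) ** 2)      # Gaussian kernel matrix between examples
    return np.linalg.solve(phi, example_warps)

def rbf_interpolate(query_pose, example_poses, weights, eps=1.0):
    # evaluate the interpolated warp at an arbitrary body pose
    dist = np.linalg.norm(example_poses - query_pose, axis=-1)
    return np.exp(-(eps * dist) ** 2) @ weights
```

At the example poses themselves, the interpolation reproduces the stored warps exactly; in between, nearby examples dominate the synthesized warp.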
workshop P. Fechteler, A. Hilsmann, P. Eisert @ VMV 2012
Kinematic ICP for Articulated Template Fitting, Proceedings of the 17th International Workshop on Vision, Modeling and Visualization, Magdeburg, Germany, 12th - 14th November 2012.
[Bibtex]   [PDF]   [Poster PDF]  


Abstract: In this paper, we present an efficient optimization method to adapt an articulated 3D template model to a full or partial 3D mesh. The well-known ICP algorithm is enhanced to fit a generic template to a target mesh. Each iteration jointly refines the parameters for global rigid alignment, uniform scale as well as the rotation parameters of all joint angles. The articulated 3D template model is based on the publicly available SCAPE dataset, enhanced with automatically learned rotation centers of the joints and Linear Blend Skinning weights for each vertex. In two example applications we demonstrate the effectiveness of this computationally efficient approach: pose recovery from full meshes and pose tracking from partial depth maps.
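The joint refinement of alignment and articulation in each ICP iteration can be sketched on a toy planar two-bone chain (illustrative code, not the paper's implementation; the Gauss-Newton step and the chain parametrization are my own choices):

```python
import numpy as np

def fk(bone, t, angles):
    # forward kinematics of a planar 2-bone chain (unit-length bones, root at origin)
    a0, a1 = angles
    d0 = np.array([np.cos(a0), np.sin(a0)])
    if bone == 0:
        return t * d0
    d1 = np.array([np.cos(a0 + a1), np.sin(a0 + a1)])
    return d0 + t * d1

def kinematic_icp(template, target, angles, iters=40):
    """ICP-style fitting: closest-point correspondences + joint Gauss-Newton update."""
    angles = np.asarray(angles, float)
    for _ in range(iters):
        cur = np.array([fk(b, t, angles) for b, t in template])
        # correspondences: nearest target point for every posed template point
        corr = np.array([target[np.argmin(np.linalg.norm(target - p, axis=1))]
                         for p in cur])
        r = (cur - corr).ravel()
        # numeric Jacobian of the residual w.r.t. both joint angles
        J = np.zeros((r.size, 2))
        for k in range(2):
            pert = angles.copy()
            pert[k] += 1e-6
            cur_p = np.array([fk(b, t, pert) for b, t in template])
            J[:, k] = ((cur_p - corr).ravel() - r) / 1e-6
        angles -= np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), J.T @ r)
    return angles

# synthetic target: the same chain posed with "unknown" angles
true_angles = np.array([0.3, -0.4])
samples = [(b, t) for b in (0, 1) for t in np.linspace(0.02, 1.0, 50)]
target = np.array([fk(b, t, true_angles) for b, t in samples])
template = [(b, t) for b in (0, 1) for t in np.linspace(0.1, 1.0, 10)]
recovered = kinematic_icp(template, target, [0.0, 0.0])
```

The real method additionally refines global rigid alignment and uniform scale in the same update, but the structure of the loop is the same.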
conference P. Fechteler, P. Eisert @ CVMP 2011
Recovering Articulated Pose of 3D Point Clouds, Proceedings of the 8th European Conference on Visual Media Production, London, UK, 16th - 17th November 2011.
[Bibtex]   [PDF]  


Abstract: We present an efficient optimization method to determine the 3D pose of a human from a point cloud. The well-known ICP algorithm is adapted to fit a generic articulated template to the 3D points. Each iteration jointly refines the parameters for rigid alignment, uniform scale as well as all joint angles. In experimental results we demonstrate the effectiveness of this computationally efficient approach.
conference P. Fechteler, B. Prestele, P. Eisert @ CVMP 2010
Streaming Graphical Content for Highly Interactive 3D Applications, Proceedings of the 7th European Conference on Visual Media Production, London, UK, 17th - 18th November 2010.
[Bibtex]   [PDF]  


Abstract: We present a video streaming solution to provide fluent remote access to highly interactive 3D applications, such as games. To fulfill the very low delay and low complexity constraints of this class of applications, several optimizations have been developed. Image preprocessing is implemented on the graphics card to make efficient reuse of the rendered output, as well as of the GPU's parallel processing capabilities. H.264/AVC video encoding is accelerated by extracting additional information from the rendering context, which allows for direct calculation of motion vectors and partitioning of macroblocks, thereby omitting the demanding search of generic video encoders. A highly optimized client software has been developed to provide very low delay playback of streamed video and audio, using minimum buffering. In experiments, a barely noticeable delay of less than 40 ms was achieved.
magazine R. Austinat, P. Fechteler, H. Gieselmann @ c't 21/2010
Über den Wolken - Wie Cloud-Gaming den Spielemarkt revolutioniert, c't, Heise Verlag, Ausgabe 21/2010, Seite 76-83.
[Bibtex]   [PDF]  


Abstract: In the future, games will no longer need to be installed at all. They will run on huge server farms on the Internet, which stream only video and audio to the player. A whole front of cloud services is approaching Europe, blocking out the sun for manufacturers of graphics cards and souped-up gaming PCs.
summit B. Prestele, P. Fechteler, A. Laikari, P. Eisert, J.-P. Laulajainen @ NEM Summit 2010
Enhanced Video Streaming for Remote 3D Gaming, Proceedings of the Networked & Electronic Media Summit, Barcelona, Spain, 13th - 15th October 2010.
[Bibtex]   [PDF]  


journal P. Fechteler and P. Eisert @ FKTG Journal
Effizientes Streamen interaktiver 3D-Computerspiele, Fernseh- und Kinotechnik, FKTG, Ausgabe 10/2010, Seite 515-519.
[Bibtex]   [PDF]  


Abstract: To support a wide range of end devices with good performance, two streaming methods have been developed. With graphics streaming, the graphics commands are transmitted directly to end devices equipped with a graphics processor, where the image is rendered to match the connected display. Several optimisations developed in this context, such as intelligent caching, entropy coding and local emulation of the graphics context, reduced the data rate by 80%. Alternatively, for end devices without a GPU, the visual output is encoded and transmitted as video. The computationally intensive video encoding process was optimised using additional information from the render context, achieving an average acceleration of around 25%. The article presents this platform for streaming interactive 3D computer games.
conference P. Fechteler and P. Eisert @ ICIP 2010
Accelerated Video Encoding Using Render Context Information, Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, China, 26th - 29th September 2010, pp. 2033-2036.
[Bibtex]   [PDF]  


Abstract: In this paper, we present a method to speed up video encoding of GPU rendered 3D scenes, which is particularly suited for the efficient and low-delay encoding of 3D game output as a video stream. The main idea of our approach is to calculate motion vectors directly from the 3D scene information used during rendering of the scene. This allows the omission of the computationally expensive motion estimation search algorithms found in most of today's video encoders. The presented method intercepts the graphics commands during runtime of 3D computer games to capture the required projection information without requiring any modification of the game executable. We demonstrate that this approach is applicable to games based on Linux/OpenGL as well as Windows/DirectX. In experimental results we show an acceleration of video encoding performance of approximately 25% with almost no degradation in image quality.
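The core idea of computing motion vectors from the render context can be sketched as a reprojection using the depth buffer and the camera matrices of two consecutive frames (a minimal illustrative sketch assuming static geometry; the variable names are assumptions, not the paper's code):

```python
import numpy as np

def motion_vector(px, py, depth_ndc, cur_mvp, prev_mvp, width, height):
    """Screen-space motion of pixel (px, py) between the previous and current frame.

    depth_ndc is the current depth-buffer value mapped to NDC; the scene point is
    assumed static, so all apparent motion stems from the camera matrices.
    """
    ndc = np.array([2.0 * px / width - 1.0, 1.0 - 2.0 * py / height, depth_ndc, 1.0])
    world = np.linalg.inv(cur_mvp) @ ndc        # unproject via the current MVP
    world /= world[3]
    prev = prev_mvp @ world                     # reproject into the previous frame
    prev /= prev[3]
    px_prev = (prev[0] + 1.0) * 0.5 * width
    py_prev = (1.0 - prev[1]) * 0.5 * height
    return np.array([px_prev - px, py_prev - py])

# example: identity current MVP; previous MVP shifted by 0.2 in clip-space x
cur = np.eye(4)
prev = np.eye(4)
prev[0, 3] = 0.2
mv = motion_vector(50, 50, 0.0, cur, prev, 100, 100)   # -> [10., 0.]
```

Such a vector can be handed to the encoder directly, replacing the exhaustive block-matching search.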
conference A. Laikari, P. Fechteler, B. Prestele, P. Eisert and J.-P. Laulajainen @ 3DTV-CON 2010
Accelerated Video Streaming for Gaming Architecture, Proceedings of 3DTV Conference, Tampere, Finland, 7th - 9th June 2010.
[Bibtex]   [PDF]  


Abstract: The CPU and graphics performance requirements of computer games are growing ever higher. At the same time, many low-cost and modest-performance CE devices are gaining popularity. People are already accustomed to a mobile lifestyle, at home and on the go, and want to enjoy entertainment everywhere. This paper describes an accelerated video streaming method used in a gaming system, called Games@Large, which enables heavy PC game playing on low-cost CE devices without any modification of the game software. The key innovations of the Games@Large system are game execution distribution, game streaming, including graphics/video and audio streaming, and game control decentralization, as well as network quality of service management. This paper concentrates on the advanced video streaming techniques used in Games@Large.
talk P. Fechteler @ FKTG 2010
Ubiquitous 3D Computer Gaming, 24. Fachtagung der Fernseh- und Kinotechnischen Gesellschaft e.V., Hamburg, Germany, 20th May 2010.

Abstract: A framework has been developed to support 3D computer gaming on a wide range of end devices, from small mobile devices through set-top boxes to high-end computers. This system, developed within the European project Games@Large, aims to provide a platform for remote gaming in home, hotel, internet café and other local environments. The platform provides
- game execution on server without any local installation
- support of unmodified off-the-shelf computer games
- multiple parallel game sessions
- different real time streaming methods
The main focus of this presentation is on the streaming methods. To support as many end devices as possible with an enjoyable gaming experience, two streaming methods have been developed. For end devices with a GPU, the graphics commands are transmitted and the client renders them, optimized for its screen resolution. The advantage of this approach is that even large screens can be served without high bit rates. Additionally, the render commands are transmitted as the game emits them, before the full scene is rendered, which reduces overall delay. In order to achieve real-time execution with very low delay, several optimizations have been developed:
- local emulation of server side state of graphics hardware (to answer frequent requests directly without network usage)
- caching server memory at client, e.g. display lists, vertex buffers ...
- encoding/compression of graphics stream
- cross-streaming by deploying an appropriate meta graphics language/protocol (to support games for DirectX/OpenGL independent of the client's graphics library)
In scenarios where graphics streaming is not an option, the visual data is streamed as H.264 video. This is the case when the end device does not contain a GPU, when the game generates commands too complex to be handled with graphics streaming, or when the client's display is small enough that the amount of video data is smaller than the corresponding graphics stream. Since encoding takes place in parallel to a running game, several optimizations have been developed to reduce the computational load of this typically very demanding encoding task. This is even more crucial considering that there may be several video streaming sessions running in parallel on a single machine. These optimizations are
- intercepting and adapting graphics commands to generate frames ideally suited for the client (viewport - screensize etc.)
- capturing additional render context information, like depth map and projection information, in order to directly compute the macroblock partitioning and motion vectors used by H.264 (in contrast to the exhaustive search performed by generic encoders)
- continuously interspersing Intra macroblocks, in contrast to the usual approach of full Intra frames, in order to reduce peaks in the bitrate
This framework provides a fully transparent way to deliver full 3D game play to nearly every kind of end device, independent of hardware and software.
workshop A. Jurgelionis, J.-P. Laulajainen, P. Fechteler, H. David, F. Bellotti, P. Eisert and A. Gloria @ N&G 2010.
Testing cross-platform streaming of video games over wired and wireless LANs, Proceedings of First International Workshop on Networking and Games, Perth, Australia, 20th - 23rd April 2010.
[Bibtex]   [PDF]  


Abstract: In this paper we present a new cross-platform approach for video game delivery in wired and wireless local networks. The developed 3D streaming and video streaming approaches enable users to access video games on set-top boxes and handheld devices that are natively not capable of running PC games. During the development of the distributed gaming system we have faced a number of challenges and problems posed by hardware and network limitations. In order to solve these problems, we have developed a multilevel testing methodology based on user assessment and technical measures for the system under development. In this paper we focus on the technical measures and instrumentation that we use for the system's performance measurement and testing. The benefits of our testing methodology are demonstrated through examples from the development and testing work.
workshop A. Laikari, J.-P. Laulajainen, A. Jurgelionis, P. Fechteler and F. Bellotti @ UCMedia / PerMeD 2009
Gaming Platform for Running Games on Low-End Devices, Proceedings of the ICST Conference on User Centric Media - Personalization in Media Delivery Platforms, Venice, Italy, 9th December 2009.
[Bibtex]   [PDF]  


Abstract: Low-cost networked consumer electronics (CE) devices are widely used. Various applications are offered, including IPTV, VoIP, VoD, PVR and games. At the same time, the CPU and graphics performance requirements of computer games are continuously growing. For pervasive gaming in various environments, like at home, in hotels, or in internet cafés, it is beneficial to run games also on mobile devices and modest-performance CE devices such as set-top boxes. The EU IST Games@Large project is developing a new cross-platform approach for distributed 3D gaming in local networks. It introduces a novel system architecture and protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach for the 3D graphics output to multiple end devices enable access to games on low-cost devices that natively lack the power of executing high-quality games.
conference P. Fechteler and P. Eisert @ ICIP 2009
Depth Map Enhanced Macroblock Partitioning for H.264 Video Coding of Computer Graphics Content, Proceedings of the 16th IEEE International Conference on Image Processing, Cairo, Egypt, 7th - 11th November 2009, pp. 3441-3444.
[Bibtex]   [PDF]  


Abstract: In this paper, we present a method to speed up video encoding of GPU rendered scenes. Modern video codecs, like H.264/AVC, are based on motion compensation and support partitioning of macroblocks, e.g. 16x16, 16x8, 8x8, 8x4 etc. In general, encoders use expensive search methods to determine suitable motion vectors and compare the rate-distortion score for possible macroblock partitionings, which results in high computational encoder load. We present a method to accelerate this process for the case of streaming the graphical output of unmodified, commercially available 3D games which use a Skybox or Skydome rendering technique. For rendered images, additional information from the render context of OpenGL or DirectX is usually available, which helps in the encoding process. By incorporating the depth map from the graphics board, such sky regions can be uniquely identified. By adapting the macroblock partitioning accordingly, the computationally expensive search methods can often be avoided. A further reduction of encoding load is achieved by additionally capturing the projection matrices during the Skybox rendering and using them to directly calculate a motion vector, which is usually the result of expensive search methods. In experimental results, we demonstrate the reduced computational encoder load.
summit A. Laikari, P. Fechteler, P. Eisert, A. Jurgelionis, F. Bellotti and A. Gloria @ NEM Summit 2009
Games@Large Distributed Gaming System, Proceedings of the Networked & Electronic Media Summit, Saint-Malo, France, 28th - 30th September 2009.
[Bibtex]   [PDF]  


Abstract: The CPU and graphics performance requirements of computer games are continuously growing. At the same time, many low-cost and modest-performance CE devices are gaining popularity. People are already accustomed to a mobile lifestyle, at home and on the go, and want to enjoy entertainment everywhere. This paper describes a novel gaming system, called Games@Large, which enables heavy PC game play on low-cost consumer electronics (CE) devices without any modification of the game software. The key innovations of the Games@Large system are distribution of game execution, streaming of graphics, video, audio, and game control, as well as network quality of service management.
journal P. Fechteler and P. Eisert @ IET Journal on Computer Vision
Adaptive Colour Classification for Structured Light Systems, IET Journal on Computer Vision: Special Issue on 3D Face Processing, Volume 3, Issue 2, p. 49-59, 3rd June 2009.
[Bibtex]   [PDF]  


Abstract: The authors present an adaptive colour classification method as well as specialised low-level image processing algorithms. With this approach, the authors achieve high-quality 3D reconstructions with a single-shot structured light system without the need for dark laboratory environments. The main focus of the presented work lies in the enhancement of robustness with respect to environment illumination, colour cross-talk, reflectance characteristics of the scanned face etc. For this purpose, the colour classification is made adaptive to the characteristics of the captured image to compensate for such distortions. Further improvements are concerned with enhancing the quality of the resulting 3D models. To this end, the authors replace the typical general-purpose image preprocessing with specialised low-level algorithms operating on raw photo sensor data. The presented system is suitable for generating high-speed scans of moving objects because it relies on only one captured image. Furthermore, due to the adaptive nature of the used colour classifier, it generates high-quality 3D models even under perturbing light conditions.
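One simple way to make a colour classifier adaptive to the captured image, in the spirit described here (a generic k-means sketch, not the authors' actual classifier), is to refine the nominal projector colours toward the colours actually observed:

```python
import numpy as np

def adapt_colour_classes(pixels, nominal_centres, iters=10):
    """Refine nominal projector colours toward the observed image colours (k-means).

    pixels: (n, 3) observed RGB samples; nominal_centres: (k, 3) projected colours.
    Returns the adapted class centres and a class label per pixel.
    """
    centres = np.array(nominal_centres, float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign every pixel to its nearest current class centre
        d2 = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        # move each centre to the mean of its assigned pixels
        for k in range(len(centres)):
            assigned = pixels[labels == k]
            if len(assigned):
                centres[k] = assigned.mean(axis=0)
    return centres, labels
```

After adaptation, the class centres reflect the actual illumination, colour cross-talk and surface reflectance of the scene rather than the ideal projector colours.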
journal A. Jurgelionis et al. @ International Journal of Computer Games Technology
Platform for Distributed 3D Gaming, International Journal of Computer Games Technology: Special Issue on Cyber Games and Interactive Entertainment, Volume 2009, 15 pages, June 2009.
[Bibtex]   [PDF]   [WWW]  


Abstract: Video games are typically executed on Windows platforms with the DirectX API and require high-performance CPUs and graphics hardware. For pervasive gaming in various environments such as homes, hotels, or internet cafés, it is beneficial to run games also on mobile devices and modest-performance CE devices, avoiding the necessity of placing a noisy workstation in the living room or costly computers/consoles in each room of a hotel. This paper presents a new cross-platform approach for distributed 3D gaming in wireless local networks. We introduce the novel system architecture and the protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach for the 3D graphics output to multiple local end devices enable access to games on low-cost set-top boxes and handheld devices that natively lack the power to execute a game with high-quality graphical output.
magazine P. Fechteler @ Linux Magazin 02/09
Mit viel Profil - Räumliche Scans mit Consumer-Hardware, Linux Magazin, issue 02/09, pp. 88-91.
[Bibtex]   [PDF]  


Abstract: The structured-light principle yields a low-cost and reliable method for 3D scanning. All that is required, besides an off-the-shelf projector and a digital camera, are a few tricks from the mathematical algorithm toolbox.
conference P. Eisert and P. Fechteler @ ICIP 2008
Low Delay Streaming of Computer Graphics, Proceedings of the 15th IEEE International Conference on Image Processing, San Diego, California, USA, 12th - 15th October 2008, pp. 2704-2707.
[Bibtex]   [PDF]  


Abstract: In this paper, we present a graphics streaming system for remote gaming in a local area network. The framework aims at creating a networked game platform for home and hotel environments. A local PC based server executes a computer game and streams the graphical output to local devices in the rooms, such that the users can play everywhere in the network. Since delay is extremely crucial in interactive gaming, efficient encoding and caching of the commands is necessary. In our system we also address the round trip time problem of commands requiring feedback from the graphics board by simulating the graphics state at the server. This results in a system that enables interactive game play over the network.
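The round-trip avoidance idea, answering state queries from a server-side copy of the graphics state rather than querying the client's graphics board, can be sketched as follows. The command names and state keys are hypothetical simplifications, not the actual protocol described in the paper:

```python
# Sketch of server-side graphics-state simulation: state-setting commands
# are both recorded locally and forwarded to the client, so any later
# state query is answered from the local shadow copy instead of incurring
# a network round trip to the client's GPU.

class GraphicsStateShadow:
    def __init__(self):
        self.state = {}       # shadow copy of the remote graphics state
        self.outgoing = []    # encoded commands queued for the client

    def set_state(self, key, value):
        self.state[key] = value                      # update local shadow
        self.outgoing.append(("SET", key, value))    # forward to client

    def get_state(self, key):
        # Answered from the shadow copy: no round trip to the client.
        return self.state.get(key)

shadow = GraphicsStateShadow()
shadow.set_state("blend_mode", "alpha")
mode = shadow.get_state("blend_mode")   # served locally
```

The design choice is that only write commands cross the network; reads never do, which removes the query latency from the interactive loop.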
workshop P. Fechteler and P. Eisert @ CVPR 2008 3DFP Workshop
Adaptive Color Classification for Structured Light Systems, Proceedings of IEEE CVPR Workshop on 3D Face Processing, Anchorage, Alaska, USA, 27th June 2008, pp. 1-7.
[Bibtex]   [PDF]  


Abstract: We present a system to capture high-accuracy 3D models of faces from just one photo, requiring no specialized hardware beyond a consumer-grade digital camera and projector. The proposed 3D face scanner utilizes structured light techniques: a colored pattern is projected onto the face of interest while a photo is taken. The 3D geometry is then calculated from the distortions of the pattern detected in the face, by triangulating the pattern found in the captured image against the projected one.
   The main focus of our work lies in enhancing the system's robustness with respect to ambient illumination, color cross-talk, reflectance characteristics of the scanned face, etc. For this purpose, the color classification of the proposed system is made adaptive to the characteristics of the captured image to compensate for such distortions. Further improvements concern enhancing the quality of the resulting 3D models: we replace the typical general-purpose image preprocessing with specialized low-level algorithms operating on raw CCD sensor data.
   The presented system is suitable for generating high-speed scans of moving objects because it relies on only one captured image. Furthermore, due to the adaptive nature of the color classifier, it generates high-quality 3D models even under perturbing lighting conditions.
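The triangulation step can be sketched as a ray-plane intersection: each decoded stripe identifies a calibrated projector plane, and the 3D point is where the camera ray through the pixel meets that plane. All numeric values (intrinsics, plane parameters) below are made-up illustration values, not calibration data from the paper:

```python
# Single-shot structured-light triangulation as ray-plane intersection.
# Camera centre is the origin; the projector stripe defines a plane n.P = c.

def pixel_ray(u, v, fx, fy, cx, cy):
    """Direction of the camera ray through pixel (u, v) for a pinhole
    camera with focal lengths (fx, fy) and principal point (cx, cy)."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def intersect_ray_plane(d, n, c):
    """Intersect the ray P(t) = t*d with the plane n . P = c."""
    denom = sum(ni * di for ni, di in zip(n, d))
    if abs(denom) < 1e-9:
        raise ValueError("ray parallel to projector plane")
    t = c / denom
    return tuple(t * di for di in d)

# Example: one pixel in a 640x480 image and one decoded stripe plane.
d = pixel_ray(400, 260, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
n, c = (0.5, 0.0, 0.2), 0.3          # hypothetical calibrated stripe plane
X, Y, Z = intersect_ray_plane(d, n, c)
```

Because one image fixes both the pixel (the ray) and the decoded stripe (the plane), a single shot suffices, which is what makes the scanner usable for moving faces.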
conference P. Eisert, P. Fechteler and J. Rurainsky @ CVPR 2008
3-D Tracking of Shoes for Virtual Mirror Applications, Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Anchorage, Alaska, USA, 24th - 26th June 2008, pp. 1-6.
[Bibtex]   [PDF]  


Abstract: In this paper, augmented reality techniques are used to create a Virtual Mirror for the real-time visualization of customized sports shoes. Similar to looking into a mirror when trying on new shoes in a shop, we create the same impression, but for virtual shoes that the customer can design individually. For that purpose, we replace the real mirror with a large display that shows the mirrored input of a camera capturing the legs and shoes of a person. 3-D tracking of both feet and exchanging the real shoes for computer graphics models gives the impression of actually wearing the virtual shoes. The 3-D motion tracker presented in this paper exploits mainly silhouette information to achieve robust estimates for both shoes from a single camera view. A hierarchical approach in an image pyramid enables real-time estimation at more than 30 frames per second.
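The hierarchical coarse-to-fine structure of such a pyramid tracker can be sketched on a toy problem. Here the pose is reduced to a single 1-D shift and the cost to an absolute-difference match, purely to show the pyramid control flow; this is not the paper's silhouette-based cost:

```python
# Coarse-to-fine estimation over a pyramid: search globally at the coarsest
# level, then double the estimate and refine locally at each finer level.
# Toy setup: find the integer shift of a 1-D template inside a signal.

def downsample(x):
    """Halve the resolution by averaging neighbouring pairs."""
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def cost(signal, template, shift):
    """Sum of absolute differences at a candidate shift."""
    return sum(abs(signal[i + shift] - t) for i, t in enumerate(template))

def best_shift_around(signal, template, center, radius):
    lo = max(0, center - radius)
    hi = min(len(signal) - len(template), center + radius)
    return min(range(lo, hi + 1), key=lambda s: cost(signal, template, s))

def track_shift(signal, template, levels=3):
    sigs, tmps = [signal], [template]
    for _ in range(levels - 1):
        sigs.append(downsample(sigs[-1]))
        tmps.append(downsample(tmps[-1]))
    shift = 0
    for level in reversed(range(levels)):
        # global search only at the coarsest level, local refinement below
        radius = len(sigs[level]) if level == levels - 1 else 1
        shift = best_shift_around(sigs[level], tmps[level], shift, radius)
        if level > 0:
            shift *= 2  # propagate the estimate to the next finer level
    return shift

signal = [0.0] * 8 + [1.0] * 4 + [0.0] * 4
template = [1.0] * 4
```

Most of the search effort is spent on the small coarse levels, which is why such hierarchies reach real-time rates.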
symposium I. Nave, H. David, A. Laikari, P. Eisert and P. Fechteler @ ISCE 2008
Games@Large Graphics Streaming Architecture, Proceedings of the 12th International Symposium on Consumer Electronics, Algarve, Portugal, 14th - 16th April 2008.
[Bibtex]   [PDF]  


Abstract: In coming years we will see low-cost networked consumer electronics (CE) devices dominating the living room. Various applications will be offered, including IPTV, VoIP, VoD, PVR, and others. With regard to gaming, the need to compete with PlayStation and Xbox will require a radical change in system architecture. While traditional CE equipment suffers from having to meet low BOM (bill of materials) targets, dictated by a highly competitive market and the target costs of cable companies, consoles enjoy superior hardware and software capabilities, being able to offset hardware and BOM costs with software royalties. Exent Technologies is leading the European FP6 Integrated Project Games@Large, whose mission is to research, develop and implement a new platform aimed at providing users with a richer variety of entertainment experiences in familiar environments, such as their house, hotel room, or internet café. This will support low-cost, ubiquitous gameplay throughout such environments, while taking advantage of existing hardware and providing multiple members of the family and community the ability to play simultaneously and to share experiences.
   This paper focuses on one of the innovative aspects of the Games@Large project: the interactive streaming of graphical output to client devices. This is achieved by capturing the graphical commands at the DirectX API on the server and rendering them locally on the client, resulting in high visual quality and enabling multiple games to execute simultaneously. To also support small handheld devices that lack hardware graphics support, an additional video-based streaming method is provided.
conference P. Fechteler, P. Eisert and J. Rurainsky @ ICIP 2007
Fast and High Resolution 3D Face Scanning, Proceedings of the 14th IEEE International Conference on Image Processing 2007, San Antonio, Texas, USA, September 2007.
[Bibtex]   [PDF]  


Abstract: In this work, we present a framework to capture 3D models of faces at high resolution with low computational load. The system captures only two pictures of the face: one illuminated with a colored stripe pattern and one with regular white light. The former is needed for the depth calculation; the latter is used as texture. Given these two images, a combination of specialized algorithms generates the 3D model. The results are shown in different views: simple surface, wireframe or polygon mesh, and textured 3D surface.
conference P. Eisert, P. Fechteler and J. Rurainsky @ ICIP 2007
Virtual Mirror: Real-Time Tracking of Shoes in Augmented Reality Environments, Proceedings of the 14th IEEE International Conference on Image Processing, San Antonio, Texas, USA, September 2007.
[Bibtex]   [PDF]  


Abstract: In this paper, we present a system that enhances the visualization of customized sports shoes using augmented reality techniques. Instead of looking at themselves in a real mirror, users can assess the appearance of new shoe models through sophisticated 3D image processing techniques. A single camera captures the person, and the mirrored images are output onto a large display which replaces the real mirror. The 3-D motion of both feet is tracked in real time with a new motion tracking algorithm. Computer graphics models of the shoes are augmented into the video such that the person appears to wear the virtual shoes.
conference P. Eisert and P. Fechteler @ SIGMAP 2007
Remote Rendering of Computer Games, Proceedings of International Conference on Signal Processing and Multimedia Applications, Barcelona, Spain, July 2007.
[Bibtex]   [PDF]  


Abstract: In this paper, we present two techniques for streaming the output of computer games to an end device for remote gaming in a local area network. We exploit these streaming methods in the European project Games@Large, which aims at creating a networked game platform for home and hotel environments. A local PC-based server executes a computer game and streams the graphical and audio output to local devices in the rooms, such that users can play everywhere in the network. Depending on the target resolution of the end device, different types of streaming are used. For small displays, the graphical output is captured and encoded as a video stream. For high-resolution devices, the graphics commands of the game are captured, encoded, and streamed to the client. Since games require significant interaction from the user, special care has to be taken to keep delays very low.
thesis Philipp Fechteler
Dynamic Load Balancing in Heterogeneous Cluster Systems, Diploma Thesis, Technical University of Berlin (TU Berlin), Institute for Telecommunication Systems - research group Communication and Operating Systems, January 2005.
[Bibtex]   [PDF]   [WWW]