Facial motion capture
From Wikipedia, the free encyclopedia

Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG), computer animation for movies, games, or real-time avatars. Because the motion of CGI characters is derived from the movements of real people, it results in a more realistic and nuanced computer character animation than if the animation were created manually.

A facial motion capture database describes the coordinates or relative positions of reference points on the actor's face. The capture may be in two dimensions, in which case the capture process is sometimes called "expression tracking", or in three dimensions. Two-dimensional capture can be achieved using a single camera and capture software. This produces less sophisticated tracking, and is unable to fully capture three-dimensional motions such as head rotation. Three-dimensional capture is accomplished using multi-camera rigs or laser marker systems. Such systems are typically far more expensive, complicated, and time-consuming to use. Two predominant technologies exist: marker-based and markerless tracking systems.
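
The "database of reference points" described above can be pictured as a sequence of timestamped frames, each mapping a named point on the face to its tracked coordinates. The following is a minimal sketch of a 2-D capture in this form; all names and coordinates are illustrative, not from any particular system.

```python
# Hypothetical facial capture database: one Frame per camera image,
# mapping marker names to (x, y) pixel coordinates from a single camera.
from dataclasses import dataclass

@dataclass
class Frame:
    time: float                              # seconds since capture start
    points: dict[str, tuple[float, float]]   # marker name -> (x, y) in pixels

capture = [
    Frame(0.00, {"lip_corner_l": (412.0, 630.5), "lip_corner_r": (508.2, 629.8)}),
    Frame(0.04, {"lip_corner_l": (410.3, 628.1), "lip_corner_r": (510.0, 627.5)}),
]

# Per-marker displacement between consecutive frames is what drives the animation.
def displacement(a: Frame, b: Frame, name: str) -> tuple[float, float]:
    (x0, y0), (x1, y1) = a.points[name], b.points[name]
    return (x1 - x0, y1 - y0)

dx, dy = displacement(capture[0], capture[1], "lip_corner_l")
```

A 3-D capture would differ only in storing (x, y, z) coordinates reconstructed from a multi-camera rig.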

Facial motion capture is related to body motion capture, but is more challenging due to the higher resolution requirements to detect and track subtle expressions possible from small movements of the eyes and lips. These movements are often less than a few millimeters, requiring even greater resolution and fidelity and different filtering techniques than usually used in full body capture. The additional constraints of the face also allow more opportunities for using models and rules.

Facial expression capture is similar to facial motion capture. It is a process of using visual or mechanical means to manipulate computer-generated characters with input from human faces, or to recognize emotions from a user.

History

One of the first papers discussing performance-driven animation was published by Lance Williams in 1990, in which he describes 'a means of acquiring the expressions of real faces, and applying them to computer-generated faces'.[1]

Technologies

Marker-based

Traditional marker-based systems apply up to 350 markers to the actor's face and track the marker movement with high-resolution cameras. This has been used on movies such as The Polar Express and Beowulf to allow an actor such as Tom Hanks to drive the facial expressions of several different characters. Unfortunately, this is relatively cumbersome and makes the actor's expressions overly driven once the smoothing and filtering have taken place. Next-generation systems such as CaptiveMotion utilize offshoots of the traditional marker-based system with higher levels of detail.

Active LED Marker technology is currently being used to drive facial animation in real-time to provide user feedback.

Markerless

Markerless technologies use features of the face such as nostrils, the corners of the lips and eyes, and wrinkles, and then track them. This technology has been discussed and demonstrated at CMU,[2] IBM,[3] the University of Manchester (where much of this work started with Tim Cootes,[4] Gareth Edwards and Chris Taylor) and elsewhere, using active appearance models, principal component analysis, eigentracking, deformable surface models and other techniques to track the desired facial features from frame to frame. This technology is much less cumbersome, and allows greater expression for the actor.
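
The statistical-shape idea behind active appearance models and PCA tracking can be sketched roughly as follows: stack each frame's landmark coordinates into a shape vector, subtract the mean shape, and find the dominant mode of variation, so that a new frame can be summarized by its score along that mode. This toy example (made-up data, a single mode found by power iteration) is only meant to illustrate the principle, not any production tracker.

```python
# Toy PCA on flattened facial landmark vectors (illustrative data).
import math

shapes = [  # each row: (x, y) of 3 tracked features, flattened
    [10.0, 20.0, 30.0, 20.0, 20.0, 35.0],   # roughly neutral
    [10.5, 19.0, 29.5, 19.0, 20.0, 37.0],   # mouth opening slightly
    [11.0, 18.0, 29.0, 18.0, 20.0, 39.0],   # mouth open wider
]

n, d = len(shapes), len(shapes[0])
mean = [sum(s[j] for s in shapes) / n for j in range(d)]
centered = [[s[j] - mean[j] for j in range(d)] for s in shapes]

def cov_times(v):  # covariance-matrix-times-vector, without forming the matrix
    out = [0.0] * d
    for row in centered:
        dot = sum(row[j] * v[j] for j in range(d))
        for j in range(d):
            out[j] += dot * row[j]
    return out

# Power iteration for the first principal mode of facial variation.
mode = [1.0] + [0.0] * (d - 1)   # start off-axis so iteration can converge
for _ in range(100):
    mode = cov_times(mode)
    norm = math.sqrt(sum(x * x for x in mode))
    mode = [x / norm for x in mode]

# A new frame is then described compactly by its score along this mode.
score = sum((shapes[2][j] - mean[j]) * mode[j] for j in range(d))
```

Real systems retain several modes and fit them to image evidence each frame; the compact score vector is what makes frame-to-frame tracking robust.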

These vision-based approaches can also track pupil movement, eyelids, occlusion of the teeth by the lips, and the tongue, which are obvious problems in most computer-animated features. Typical limitations of vision-based approaches are resolution and frame rate, both of which are becoming less of an issue as high-speed, high-resolution CMOS cameras become available from multiple sources.

The technology for markerless face tracking is related to that of facial recognition systems, since a facial recognition system can potentially be applied sequentially to each frame of video, resulting in face tracking. For example, the Neven Vision system[5] (formerly Eyematics, now acquired by Google) allowed real-time 2D face tracking with no person-specific training; their system was also amongst the best-performing facial recognition systems in the U.S. Government's 2002 Facial Recognition Vendor Test (FRVT). On the other hand, some recognition systems do not explicitly track expressions, or even fail on non-neutral expressions, and so are not suitable for tracking. Conversely, systems such as deformable surface models pool temporal information to disambiguate and obtain more robust results, and thus could not be applied from a single photograph.
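
The "detection applied sequentially to each frame" idea can be sketched as: run a per-frame face detector, then link each frame's detection to the nearest detection in the previous frame to form a track. Everything here is hypothetical: the detector is stubbed out, and the data is invented.

```python
# Tracking-by-detection sketch: chain per-frame detections into a track
# by nearest-neighbour association with the previous position.

def detect_faces(frame):
    """Stand-in for a real per-frame detector; returns (x, y) face centers."""
    return frame  # our toy "frames" are already lists of face centers

def track(frames):
    track_pts, prev = [], None
    for frame in frames:
        detections = detect_faces(frame)
        if prev is None:
            pt = detections[0]          # pick a face to follow in frame one
        else:                           # nearest detection to the last position
            pt = min(detections,
                     key=lambda p: (p[0] - prev[0]) ** 2 + (p[1] - prev[1]) ** 2)
        track_pts.append(pt)
        prev = pt
    return track_pts

frames = [[(100, 100), (300, 120)],
          [(104, 101), (298, 119)],
          [(109, 103), (295, 118)]]
path = track(frames)   # follows the face that starts near (100, 100)
```

As the article notes, this only works if the detector succeeds on non-neutral expressions in every frame; systems that pool temporal information trade that robustness for the ability to start from a single photograph.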

Markerless face tracking has progressed to commercial systems such as Image Metrics, which has been applied in movies such as The Matrix sequels[6] and The Curious Case of Benjamin Button. The latter used the Mova system to capture a deformable facial model, which was then animated with a combination of manual and vision tracking.[7] Avatar was another prominent motion capture movie; however, it used painted markers rather than being markerless. Dynamixyz is another commercial system currently in use.

Markerless systems can be classified according to several distinguishing criteria:

  • 2-D versus 3-D tracking
  • whether person-specific training or other human assistance is required
  • real-time performance (which is only possible if no training or supervision is required)
  • whether they need an additional source of information such as projected patterns or invisible paint such as used in the Mova system.

To date, no system is ideal with respect to all these criteria. For example, the Neven Vision system was fully automatic and required no hidden patterns or per-person training, but was 2D. The Face/Off system[8] is 3D, automatic, and real-time but requires projected patterns.

Facial expression capture

Technology

Digital video-based methods are becoming increasingly preferred, as mechanical systems tend to be cumbersome and difficult to use.

Using digital cameras, the input user's expressions are processed to estimate the head pose, which allows the software to then find the eyes, nose and mouth. The face is initially calibrated using a neutral expression. Then, depending on the architecture, the eyebrows, eyelids, cheeks, and mouth can be processed as differences from the neutral expression. This is done by, for instance, looking for the edges of the lips and recognizing them as a unique object. Contrast-enhancing makeup or markers are often worn, or some other method is used to make the processing faster. Like voice recognition, even the best techniques are only good about 90 percent of the time, requiring a great deal of tweaking by hand, or tolerance for errors.
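
The "differences from the neutral expression" step described above amounts to subtracting each feature's calibrated neutral position from its current position; the resulting offsets are what a rig consumes. A minimal sketch, with invented feature names and image coordinates:

```python
# After neutral-expression calibration, express each tracked feature as
# an offset from its neutral position (illustrative names and values).

neutral = {"brow_l": (100.0, 80.0), "mouth_top": (150.0, 200.0)}

def expression_deltas(current, neutral):
    """Offset of each tracked feature from its calibrated neutral pose."""
    return {
        name: (x - neutral[name][0], y - neutral[name][1])
        for name, (x, y) in current.items()
    }

frame = {"brow_l": (100.0, 74.0), "mouth_top": (150.0, 206.0)}
deltas = expression_deltas(frame, neutral)
# brow moved up 6 px (y decreases upward in image coords), mouth opened 6 px
```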

Since computer-generated characters don't actually have muscles, different techniques are used to achieve the same results. Some animators create bones or objects that are controlled by the capture software, and move them accordingly, which when the character is rigged correctly gives a good approximation. Since faces are very elastic this technique is often mixed with others, adjusting the weights differently for the skin elasticity and other factors depending on the desired expressions.
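
One common form of the "bones or objects controlled by the capture software" approach is morph targets (blendshapes): the rendered mesh is the neutral mesh plus a weighted sum of per-expression vertex offsets, with the weights supplied by the capture software each frame. This is a generic sketch of that formula, not any particular package's API.

```python
# Blendshape mixing: vertex = neutral + sum_i w_i * (target_i - neutral).

def blend(neutral, targets, weights):
    """neutral: list of (x, y, z); targets: {name: same-shaped vertex list};
    weights: {name: float} produced by the capture software per frame."""
    out = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, w in weights.items():
            tx, ty, tz = targets[name][i]
            dx += w * (tx - x)
            dy += w * (ty - y)
            dz += w * (tz - z)
        out.append((x + dx, y + dy, z + dz))
    return out

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]                  # 2-vertex toy mesh
targets = {"smile": [(0.0, 0.2, 0.0), (1.0, 0.3, 0.0)]}       # full-smile shape
mesh = blend(neutral, targets, {"smile": 0.5})                # half-strength smile
```

Mixing several weighted targets, and varying weights per region for skin elasticity, gives the adjustable approximation the paragraph describes.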

Usage

Several commercial companies are developing products that have been used, but are rather expensive.[citation needed]

It is expected that this will become a major input device for computer games once the software is available in an affordable format, but the hardware and software do not yet exist, despite 15 years of research producing results that are almost usable.[citation needed]

Communication with real-time avatars

The first application to gain wide adoption was communication: initially video telephony and multimedia messaging, and later 3D communication with mixed reality headsets.

With advances in machine learning, computing power and sensors, especially on mobile phones, facial motion capture technology has become widely available. Two notable examples are Snapchat's lens feature and Apple's Memoji,[9] which can be used to record messages with avatars or live via the FaceTime app. With these applications (and many others), most modern mobile phones are capable of performing real-time facial motion capture. More recently, real-time facial motion capture combined with realistic 3-D avatars was introduced to enable immersive communication in mixed reality (MR) and virtual reality (VR). Meta demonstrated its Codec Avatars, communicating via its MR headset Meta Quest Pro to record a podcast with two remote participants.[10] Apple's MR headset, Apple Vision Pro, also supports real-time facial motion capture that can be used with applications such as FaceTime. Real-time communication applications prioritize low latency to facilitate natural conversation and ease of use, aiming to make the technology accessible to a broad audience. These considerations may limit the achievable accuracy of the motion capture.

See also

  • Eye tracking
  • Computer facial animation
  • Deepfake
  • Facial recognition system
  • Facial Action Coding System
  • Uncanny valley

References

  1. ^ Performance-Driven Facial Animation, Lance Williams, Computer Graphics, Volume 24, Number 4, August 1990
  2. ^ AAM Fitting Algorithms Archived 2017-02-22 at the Wayback Machine from the Carnegie Mellon Robotics Institute
  3. ^ "Real World Real-time Automatic Recognition of Facial Expressions" (PDF). Archived from the original (PDF) on 2015-11-19. Retrieved 2015-11-17.
  4. ^ Modelling and Search Software Archived 2009-02-23 at the Wayback Machine ("This document describes how to build, display and use statistical appearance models.")
  5. ^ Wiskott, Laurenz; Fellous, J.-M.; Kruger, N.; von der Malsurg, C. (1997), "Face recognition by elastic bunch graph matching", Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol. 1296, Springer, pp. 456–463, CiteSeerX 10.1.1.18.1256, doi:10.1007/3-540-63460-6_150, ISBN 978-3-540-63460-7
  6. ^ Borshukov, George; Piponi, Dan; Larsen, Oystein; Lewis, J. P.; Tempelaar-Lietz, Christina (2005). "Universal capture - image-based facial animation for "The Matrix Reloaded"". ACM SIGGRAPH 2005 Courses on - SIGGRAPH '05. p. 16. doi:10.1145/1198555.1198596.
  7. ^ Barba, Eric; Preeg, Steve (18 March 2009), "The Curious Face of Benjamin Button", Presentation at Vancouver ACM SIGGRAPH Chapter, 18 March 2009.
  8. ^ Weise, Thibaut; Li, Hao; Van Gool, Luc; Pauly, Mark (2009). "Face/Off: Live facial puppetry". Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. pp. 7–16. doi:10.1145/1599470.1599472. ISBN 978-1-60558-610-6.
  9. ^ "Use Memoji on your iPhone or iPad Pro". support.apple.com. Retrieved October 16, 2024.
  10. ^ "#398 – Mark Zuckerberg: First Interview in the Metaverse". lexfriedman.com. 28 September 2023. Retrieved October 16, 2024.

External links

  • Carnegie Mellon University
  • Delft University of Technology
  • Intel
  • Sheffield and Otago
Retrieved from "https://teknopedia.ac.id/w/index.php?title=Facial_motion_capture&oldid=1335975491"