Cameras Reference

Unity User Manual (5.6), Graphics Reference
Cameras are the devices that capture and display the world to the player. By customizing and manipulating cameras, you can make the presentation of your game truly unique. You can have an unlimited number of cameras in a scene. They can be set to render in any order and at any position on the screen, or to render only certain parts of the screen.
| Property | Function |
|:---|:---|
| **Clear Flags** | Determines which parts of the screen will be cleared. This is handy when using multiple Cameras to draw different game elements. |
| **Background** | The color applied to the remaining screen after all elements in view have been drawn and there is no skybox. |
| **Culling Mask** | Includes or omits layers of objects to be rendered by the Camera. Assign layers to your objects in the Inspector. |
| **Projection** | Toggles the camera’s capability to simulate perspective. |
| &nbsp;&nbsp;*Perspective* | Camera will render objects with perspective intact. |
| &nbsp;&nbsp;*Orthographic* | Camera will render objects uniformly, with no sense of perspective. Note: Deferred rendering is not supported in Orthographic mode; Forward rendering is always used. |
| **Size** (when Orthographic is selected) | The viewport size of the Camera when set to Orthographic. |
| **Field of view** (when Perspective is selected) | The width of the Camera’s view angle, measured in degrees along the local Y axis. |
| **Clipping Planes** | Distances from the camera to start and stop rendering. |
| &nbsp;&nbsp;*Near* | The closest point relative to the camera where drawing will occur. |
| &nbsp;&nbsp;*Far* | The furthest point relative to the camera where drawing will occur. |
| **Viewport Rect** | Four values that indicate where on the screen this camera view will be drawn. Measured in Viewport Coordinates (values 0 to 1). |
| &nbsp;&nbsp;*X* | The beginning horizontal position at which the camera view will be drawn. |
| &nbsp;&nbsp;*Y* | The beginning vertical position at which the camera view will be drawn. |
| &nbsp;&nbsp;*W* (Width) | Width of the camera output on the screen. |
| &nbsp;&nbsp;*H* (Height) | Height of the camera output on the screen. |
| **Depth** | The camera’s position in the draw order. Cameras with a larger value will be drawn on top of cameras with a smaller value. |
| **Rendering Path** | Options for defining which rendering method will be used by the camera. |
| &nbsp;&nbsp;*Use Player Settings* | This camera will use whichever Rendering Path is set in the Player Settings. |
| &nbsp;&nbsp;*Vertex Lit* | All objects rendered by this camera will be rendered as Vertex-Lit objects. |
| &nbsp;&nbsp;*Forward* | All objects will be rendered with one pass per material. |
| &nbsp;&nbsp;*Deferred Lighting* | All objects will be drawn once without lighting, then the lighting of all objects will be rendered together at the end of the render queue. Note: if the camera’s Projection is set to Orthographic, this value is overridden and the camera will always use Forward rendering. |
| **Target Texture** | Reference to a Render Texture that will contain the output of the Camera view. Setting this reference disables the Camera’s capability to render to the screen. |
| **HDR** | Enables High Dynamic Range rendering for this camera. |
| **Target Display** | Defines which external device to render to, between 1 and 8. |
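Because the Viewport Rect uses normalized 0-to-1 values while the screen itself is measured in pixels, it can help to see the two coordinate spaces side by side. The sketch below (an illustrative script, not part of the manual) converts the pixel at the centre of the screen into viewport coordinates:

```csharp
using UnityEngine;

// Illustrative sketch: converting between pixel (screen) coordinates and
// the normalized (viewport) coordinates used by the Viewport Rect values.
// Attach to a GameObject that has a Camera component.
public class CoordinateSpacesExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // Screen space is measured in pixels; (0,0) is the bottom-left corner.
        Vector3 screenCentre = new Vector3(Screen.width / 2f, Screen.height / 2f, 0f);

        // Viewport space is normalized, so the centre maps to roughly (0.5, 0.5).
        Vector3 viewportPoint = cam.ScreenToViewportPoint(screenCentre);
        Debug.Log(viewportPoint);
    }
}
```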
Cameras are essential for displaying your game to the player. They can be customized, scripted, or parented to achieve just about any kind of effect imaginable. For a puzzle game, you might keep the Camera static for a full view of the puzzle. For a first-person shooter, you would parent the Camera to the player character, and place it at the character’s eye level. For a racing game, you’d probably have the Camera follow your player’s vehicle.
You can create multiple Cameras and assign each one to a different Depth. Cameras are drawn from low Depth to high Depth. In other words, a Camera with a Depth of 2 will be drawn on top of a Camera with a depth of 1. You can adjust the values of the Normalized View Port Rectangle property to resize and position the Camera’s view onscreen. This can create multiple mini-views like missile cams, map views, rear-view mirrors, etc.
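As a sketch of the mini-view idea, the following script gives a second Camera a higher Depth and a small Viewport Rect so it draws as a map view in a corner of the screen. The GameObject name "MinimapCamera" is an assumption for this example:

```csharp
using UnityEngine;

// Hypothetical minimap: a second Camera drawn on top of the main view.
// The "MinimapCamera" GameObject name is an assumption for this sketch.
public class MinimapSetup : MonoBehaviour
{
    void Start()
    {
        Camera minimap = GameObject.Find("MinimapCamera").GetComponent<Camera>();

        // Cameras with a higher Depth draw on top of those with a lower Depth.
        minimap.depth = 2f;

        // Normalized Viewport Rect: a small square in the top-right corner.
        minimap.rect = new Rect(0.75f, 0.75f, 0.25f, 0.25f);
    }
}
```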
Unity supports different rendering paths. You should choose the one to use based on your game content and target platform/hardware. Different rendering paths have different features and performance characteristics that mostly affect lights and shadows. The rendering path used by your project is chosen in Player Settings. Additionally, you can override it for each Camera.
For more information on rendering paths, check the rendering paths page.
Each Camera stores color and depth information when it renders its view. The portions of the screen that are not drawn in are empty, and will display the skybox by default. When you are using multiple Cameras, each one stores its own color and depth information in buffers, accumulating more data as each Camera renders. As any particular Camera in your scene renders its view, you can set the Clear Flags to clear different collections of the buffer information. To do this, choose one of the following four options:
Skybox

This is the default setting. Any empty portions of the screen will display the current Camera’s skybox. If the current Camera has no skybox set, it will default to the skybox chosen in the Lighting window (menu: Window > Lighting). It will then fall back to the Background Color. Alternatively, a Skybox component can be added to the camera. If you want to create a new Skybox, you can use this guide.
Solid color

Any empty portions of the screen will display the current Camera’s Background Color.
Depth only

If you want to draw a player’s gun without letting it get clipped inside the environment, set one Camera at Depth 0 to draw the environment, and another Camera at Depth 1 to draw the weapon alone. Set the weapon Camera’s Clear Flags to depth only. This will keep the graphical display of the environment on the screen, but discard all information about where each object exists in 3D space. When the gun is drawn, the opaque parts will completely cover anything already drawn, regardless of how close the gun is to the wall.
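The weapon-camera setup described above can also be configured from script. This is a minimal sketch; it assumes the script is attached to the GameObject holding the weapon Camera:

```csharp
using UnityEngine;

// Sketch of the depth-only weapon camera described above. Assumes this
// script sits on the GameObject that carries the weapon Camera component.
public class WeaponCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera weaponCam = GetComponent<Camera>();

        weaponCam.depth = 1f;                          // draw after the environment camera (Depth 0)
        weaponCam.clearFlags = CameraClearFlags.Depth; // keep the colors, discard the depth buffer
    }
}
```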
Don’t clear

This mode does not clear either the color or the depth buffer. The result is that each frame is drawn over the last, resulting in a smear-looking effect. This isn’t typically used in games, and would more likely be used with a custom shader.
Note that on some GPUs (mostly mobile GPUs), not clearing the screen might result in the contents of it being undefined in the next frame. On some systems, the screen may contain the previous frame image, a solid black screen, or random colored pixels.
The Near and Far Clip Plane properties determine where the Camera’s view begins and ends. The planes are laid out perpendicular to the Camera’s direction and are measured from its position. The Near plane is the closest location that will be rendered, and the Far plane is the furthest.
The clipping planes also determine how depth buffer precision is distributed over the scene. In general, to get better precision you should move the Near plane as far away from the camera as possible.
Note that the near and far clip planes, together with the planes defined by the field of view of the camera, describe what is popularly known as the camera frustum. Unity ensures that, when rendering your objects, those which are completely outside of this frustum are not displayed. This is called Frustum Culling. Frustum Culling happens irrespective of whether you use Occlusion Culling in your game.
For performance reasons, you might want to cull small objects earlier. For example, small rocks and debris could be made invisible at a much smaller distance than large buildings. To do that, put small objects into a separate layer and set up per-layer cull distances using the Camera.layerCullDistances script function.
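A minimal sketch of per-layer cull distances follows; the layer name "SmallObjects" is an assumption for this example:

```csharp
using UnityEngine;

// Per-layer culling distances, as described above. Assumes small props
// have been assigned to a layer named "SmallObjects" (hypothetical name).
public class LayerCulling : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // One entry per layer; a value of 0 means "use the Far clip plane".
        float[] distances = new float[32];
        distances[LayerMask.NameToLayer("SmallObjects")] = 50f; // cull beyond 50 units
        cam.layerCullDistances = distances;
    }
}
```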
The Culling Mask is used for selectively rendering groups of objects using Layers. More information on using layers can be found here.
Normalized Viewport Rectangles
The Normalized Viewport Rectangle is specifically for defining the portion of the screen that the current camera view will be drawn upon. You can put a map view in the lower right-hand corner of the screen, or a missile-tip view in the upper-left corner. With a bit of design work, you can use the Viewport Rectangle to create some unique behaviors.
It’s easy to create a two-player split screen effect using the Normalized Viewport Rectangle. After you have created your two cameras, change both cameras’ H values to 0.5, then set player one’s Y value to 0.5 and player two’s Y value to 0. This will make player one’s camera display from halfway up the screen to the top, and player two’s camera start at the bottom and stop halfway up the screen.
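The same split screen can be set up from script. This is a sketch; the two camera references would be assigned in the Inspector:

```csharp
using UnityEngine;

// The two-player split screen described above, configured from script.
// Assign the two Camera references in the Inspector.
public class SplitScreenSetup : MonoBehaviour
{
    public Camera playerOneCamera;
    public Camera playerTwoCamera;

    void Start()
    {
        // Rect(x, y, width, height) in normalized viewport coordinates.
        playerOneCamera.rect = new Rect(0f, 0.5f, 1f, 0.5f); // top half
        playerTwoCamera.rect = new Rect(0f, 0f, 1f, 0.5f);   // bottom half
    }
}
```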
Marking a Camera as Orthographic removes all perspective from the Camera’s view. This is mostly useful for making isometric or 2D games.
Note that fog is rendered uniformly in orthographic camera mode and may therefore not appear as expected. This is because the Z coordinate of the post-perspective space is used for the fog “depth”. This is not strictly accurate for an orthographic camera but it is used for its performance benefits during rendering.
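Switching a camera to orthographic projection can also be done from script; a minimal sketch, with an illustrative Size value:

```csharp
using UnityEngine;

// Switching a camera to orthographic projection from script.
public class MakeOrthographic : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;

        // Size is half the vertical extent of the view, in world units.
        // 5 is just an illustrative value.
        cam.orthographicSize = 5f;
    }
}
```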
This will place the camera’s view onto a Texture that can then be applied to another object. This makes it easy to create sports arena video monitors, surveillance cameras, reflections etc.
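As a sketch, the surveillance-camera idea looks like this in script; the RenderTexture would typically be created in the Project window and assigned in the Inspector, then used by the monitor object’s material:

```csharp
using UnityEngine;

// Rendering a camera into a texture, e.g. for a surveillance monitor.
// Assign the Camera and RenderTexture references in the Inspector.
public class SecurityMonitor : MonoBehaviour
{
    public Camera securityCamera;
    public RenderTexture monitorTexture;

    void Start()
    {
        // Once targetTexture is set, this camera no longer renders to the screen.
        securityCamera.targetTexture = monitorTexture;
    }
}
```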
The Target Display setting selects which of up to eight displays the camera renders to. This is supported only on PC, Mac and Linux. In the Game view, the display chosen in the Camera Inspector is shown.
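A sketch of multi-display use from script follows. Note that the Inspector labels displays 1 to 8, while the scripting API indexes them from 0; the code assumes a second monitor is actually connected:

```csharp
using UnityEngine;

// Activating a second monitor and sending a camera to it.
// Assumes a second physical display is connected; assign the Camera
// reference in the Inspector.
public class SecondDisplay : MonoBehaviour
{
    public Camera secondCamera;

    void Start()
    {
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate(); // displays[0] is always active
            secondCamera.targetDisplay = 1; // "Display 2" in the Inspector
        }
    }
}
```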
- Cameras can be instantiated, parented, and scripted just like any other GameObject.
- To increase the sense of speed in a racing game, use a high Field of View.
- Cameras can be used in physics simulation if you add a Rigidbody Component.
- There is no limit to the number of Cameras you can have in your scenes.
- Orthographic cameras are great for making 3D user interfaces.
- If you are experiencing depth artifacts (surfaces close to each other flickering), try setting the Near Plane value as large as possible.
- Cameras cannot render to the Game Screen and a Render Texture at the same time, only one or the other.
- There’s an option of rendering a Camera’s view to a texture, called Render-to-Texture, for even more interesting effects.
- Unity comes with pre-installed Camera scripts, found in Components > Camera Control. Experiment with them to get a taste of what’s possible.
- In the Unity menu, choose Cinemachine > Create Virtual Camera.
- Use the Follow property to specify a GameObject to follow.
- Use the Look At property to specify the GameObject that the Virtual Camera should aim at.
- Customize the Virtual Camera as needed.
- In the Unity Editor, go to the player settings by navigating to the "Edit > Project Settings > Player" page.
- Select the "Windows Store" tab.
- In the "Publishing Settings > Capabilities" section, check the WebCam and Microphone capabilities.
Cinemachine speeds up game development. It frees your team from expensive camera-logic development and allows you to iterate and prototype new ideas on the fly while saving settings in play mode.

What is Cinemachine in Unity?
Cinemachine is a suite of modules for operating the Unity camera. Cinemachine solves the complex mathematics and logic of tracking targets, composing, blending, and cutting between shots.

How do I add a camera script in Unity?
First, locate and select “Main Camera” in the Hierarchy tab on the left-hand side of the Editor. Drag your script from the Projects tab and drop it on the Main Camera GameObject. In the Inspector window, with Main Camera selected, you should see that the script was assigned to the Main Camera as a Component.

Why is there no camera rendering in Unity?
The camera needs to be enabled if there is one present. If there is none, then you need to create one from GameObjects -> Camera.

What is the main camera in Unity?
The first enabled Camera component that is tagged "MainCamera" (Read Only). If there is no enabled Camera component with the "MainCamera" tag, this property is null. Internally, Unity caches all GameObjects with the "MainCamera" tag. When you access this property, Unity returns the first valid result from its cache.

How do I view an object in Unity?
Scene View Navigation
Hold the right mouse button to enter Flythrough mode. This turns your mouse and WASD keys (plus Q and E for up and down) into quick first-person view navigation. Select any GameObject and press the F key. This will center the Scene View and pivot point on the selection.
- Applying a box collider/Rigidbody/material to the camera.
- Making an empty GameObject the parent of the camera and applying a box collider/Rigidbody/material to that GameObject.
This is why I suggest using the Rigidbody method for your 2D characters. Unlike the character controller, the Rigidbody will handle all the physics calculations for you, such as gravity and momentum. With this component, your character can interact with other rigidbodies in the game.

How do you make a moving player in Unity?
Click on the Player object and, in the Inspector view, scroll down to Add Component. Add a Rigidbody, and then add another component as a Capsule Collider this time. You'll need these components to add physics, and therefore movement, to your Player.

How to make a 360 camera in Unity?
- Set Source as 360 View.
- Set Camera as MainCamera.
- Set Output Dimensions Width = 4096 & Height = 2048.
- Set Cube Map Size Width = 2048.
- Deselect Record In Stereo.
- Include or exclude audio (personal preference).
- Set Output File > File Name to “Apartment 360 Video”
It is certainly possible to have multiple cameras at once. There are a couple of things to know that will help with this:

- Cameras have a "depth" property. This determines the order in which the cameras will draw.

What would a second person game be like?
So what's a 2nd-person game? In a 2nd-person game, you would see your character through the perspective of another character interacting with them.

Does Unity only use C sharp?
The language that's used in Unity is called C# (pronounced C-sharp). All the languages that Unity operates with are object-oriented scripting languages.

Is Cinemachine worth it?
Cinemachine is nothing short of amazing and a fantastic tool to speed up the development of your game. If you're not using it, you should be. Even if it doesn't provide the perfect solution that ships with your project, it provides a great starting point for quick prototyping.

Why is C# best for Unity?
Unity favored C# due to its simplicity, combined with a JIT (just-in-time) compiler that translates your C# into relatively efficient native code. The remaining and much larger parts of the Unity engine have been developed using C++ in order to provide well-balanced and controlled performance.

Does Unity need CPU or GPU?
The processor (or CPU) is one of the most important pieces of a Unity development workstation. While many other parts of the system impact performance to some degree, the CPU is the core piece of hardware involved in absolutely anything and everything you do.

Is Unity using my GPU?
Unity by itself doesn't really use the GPU all that much; your GPU usage mostly depends on the type of content that you're making. If you work with high-quality 3D visualizations, animations, large textures, complex shaders, or need high FPS for VR, having a good GPU is pretty much mandatory.
No. You don't need a dedicated graphics card to make games, but the more complex the game you make, the more demanding it will be on your hardware, and you may eventually find it necessary to have one.

Does Unity have motion capture?
Cinema Mocap is the first and only markerless motion capture tool for the Unity game engine. Perfect for game developers and digital movie creators to quickly and easily add motion to their project.

What is Slerp in Unity?
Spherically interpolates between two vectors, interpolating between a and b by amount t. The difference between this and linear interpolation (aka "lerp") is that the vectors are treated as directions rather than points in space.
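A minimal sketch of Slerp in use follows, e.g. for swinging a camera's aim between two targets; the target transforms are illustrative and would be assigned in the Inspector:

```csharp
using UnityEngine;

// Sketch: using Vector3.Slerp to swing a facing direction smoothly.
// The "from" and "to" transforms are illustrative assumptions.
public class SlerpExample : MonoBehaviour
{
    public Transform from;
    public Transform to;
    float t;

    void Update()
    {
        t = Mathf.Clamp01(t + Time.deltaTime);

        // Directions are interpolated along an arc, not a straight line.
        Vector3 direction = Vector3.Slerp(from.forward, to.forward, t);
        transform.rotation = Quaternion.LookRotation(direction);
    }
}
```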
What are quaternions in Unity?

Quaternions can be used to represent the orientation or rotation of a GameObject. This representation internally consists of four numbers (referenced in Unity as x, y, z and w); however, these numbers don't represent angles or axes, and you never normally need to access them directly.

Is Unity good for rendering?
Advanced Lighting, Shading, and Textures
These features are what elevates a “good” 3D render into a work of art. Lighting and shadows are essential for creating realism and showing depth in 3D renders. Unity allows the creation of natural and man-made light sources. These light sources create realistic shadows.
Camera is a component, and all components have gameObject and transform properties (documented here).

What models does Unity take?
- Exported 3D file formats, such as .fbx or .obj.
- Proprietary 3D or DCC (Digital Content Creation) application files, such as .max and .blend file formats from Autodesk® 3ds Max® or Blender, for example.
Now, from the Unity3D manual under the heading “Multi-Display”, the manual says: “You can use multi-display to display up to eight different Camera views of your application on up to eight different monitors at the same time.”

What is an isometric camera in Unity?
An isometric view (also known as a pseudo-3D, 2.5D, or three-quarter view) shows more than one side of an object at a time, with no perspective depth: it is a 2D projection of a 3D environment.

How do I see what my camera sees in Unity?
- Select the Camera (GameObject) that you would like to look through while in the Scene view.
- Then go to the "GameObject" Menu and select "Align View to Selected."
The Cinemachine Virtual Camera is a component that you add to an empty GameObject. It represents a Virtual Camera in the Unity Scene. Use the Aim, Body, and Noise properties to specify how the Virtual Camera animates position, rotation, and other properties.
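A Virtual Camera can also be created from script rather than the menu. This is a sketch that assumes the Cinemachine package is installed and a follow target is assigned:

```csharp
using UnityEngine;
using Cinemachine;

// Sketch: creating a Cinemachine Virtual Camera from script.
// Assumes the Cinemachine package is installed; "target" is assigned
// in the Inspector.
public class CreateVirtualCamera : MonoBehaviour
{
    public Transform target;

    void Start()
    {
        var go = new GameObject("Virtual Camera");
        var vcam = go.AddComponent<CinemachineVirtualCamera>();

        vcam.Follow = target; // GameObject to move with
        vcam.LookAt = target; // GameObject to aim at
    }
}
```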
What is viewport in Unity?

Viewport is used when you need to be sure something will show up the same on all different screen sizes and shapes. It is independent of the pixel width and height of the exact screen in question.

How do I get camera preview in Unity?
Select the camera you want the preview from and look at the Inspector. Expanding the rollout of the "Camera" section (it should be right under "Transform") toggles the preview window at the bottom right of the Scene view.

What is a camera in Unity?
A Camera is a device through which the player views the world. A screen space point is defined in pixels. The bottom-left of the screen is (0,0); the top-right is (pixelWidth, pixelHeight). The z position is in world units from the Camera. A viewport space point is normalized and relative to the Camera.

How do I make my camera vertical in Unity?
Cameras in Unity don't have a concept of vertical or horizontal orientation. They're merely viewports into your game's world. It's the player that has a screen size with either a fixed or (in the case of mobile) rotating aspect ratio.

How do I set up a scene in Unity?
Go to File and click New Scene to call the New Scene dialog. You can also call up the dialog with Ctrl/Cmd + N. Select a template from the list. Enable Load Additively if you want Unity to load your new scene additively.

Is it easy to animate in Unity?
Unity allows you to create simple animations using a standard set of tools: keyframes, the Playhead, the Animation Timeline, and Animation Curves. Avatars are definitions for how animations affect the transforms of a model.

Can you animate directly in Unity?
Animations can come from external software like Maya, 3ds Max, or Blender, and simple animations can be created in Unity. Try creating a custom animation on a GameObject in your own Scene.

What is a camera script?
Noun: a cue sheet indicating the various camera positions to be used in a telecast.
Unity Capture is a Windows DirectShow Filter that allows you to stream a rendered camera directly to another application.