
TS5 Technical Data

 

TS5L, TS5S, TS5H, and TS5Q



• Handheld, battery-powered, with built-in touchscreen LCD
• Four models, from L (800 x 600) to Q (2560 x 2048)
• Advanced 5-Megapixel CMOS sensor
• Multiple recording modes to suit any situation
• Control via touchscreen, FasMotion PC/Mac app, or web browser

 

Standard Record Mode: Classic High-Speed Imaging - The TS5H records beautifully sharp 1920 x 1080 video at 634 frames per second, while the TS5Q records amazing 2560 x 1440 video at 359fps, in vivid color or monochrome. Smaller resolutions may be recorded at faster frame rates: 720p @ 1400fps, 800 x 600 @ 1677fps, etc. Binning and sub-sampling features of the sensor give the TS5 great flexibility and sensitivity.

 

FasFire Mode - FasFire saves images to the onboard media at ultra-fast speeds while simultaneously recording high-speed bursts of hundreds or even thousands of images at a time, so the TS5 is always ready for the next high-speed snapshot!

 

Long Record Option - Any model TS5 ordered with the “D” (dual mode) option can record continuously at high speed for many minutes, or for hours at reduced resolutions.

 

Portability - With its small form factor, battery operation, large built-in LCD touch screen and intuitive on-screen menu system, the TS5 puts the power of both a traditional high-speed camera and long-record system into the palm of your hand. The TS5 is a true point-and-shoot high-speed camera solution.

 

Flexible Control – The TS5 can be operated as a self-contained, handheld camera, or controlled over Gigabit Ethernet via Fastec FasMotion software on your PC or Mac, or via the built-in web interface with your favorite web browser on your PC, Mac, tablet, or even your smartphone.

 

High-Performance Image Transfer – The FasMotion application makes workflow a breeze with transfers of uncompressed images via Gigabit Ethernet at rates of 50–80MB/s to moderately equipped PCs.

 

Multiple Storage Options – The TS5 features both a USB port and an SD port for quick and easy image downloads to USB flash drives, SD cards, or portable hard drives. The built-in SSD provides up to 2TB of lightning-fast non-volatile internal storage.

 

Datasheet

 
 
Camera Specifications
SYSTEM DESIGN Handheld, battery-powered, portable with touchscreen LCD
SENSOR 12-bit 5MP CMOS sensor with 5µm square pixels, color or mono
SENSOR MODES Standard, binning 2x2, 4x4; Sub-sampling 2x2, 4x4; 2x bin + 2x sub
MAXIMUM RESOLUTION TS5-Q: QSXGA 2560 x 2048; -H: HD 1920 x 1080; -S: SXGA 1280 x 1024; -L: SVGA 800 x 600
LIGHT SENSITIVITY 1600 to 12,800* ISO mono, 800 to 6400* ISO color (depending on mode)
SHUTTER Global electronic shutter from 3μsec to 41.667ms
IMAGE MEMORY 4GB (std.) or 8GB (optional)
REMOVABLE STORAGE SD card (SDHC: 32GB maximum), USB flash drive
FILE FORMATS Stacks – BMP, DNG, JPEG, TIFF, TIFF(raw); Video – AVI, CAP(raw); Still – JPEG
LENS MOUNTS C-mount (std.), F-mount or PL-mount (optional)
BUILT-IN MONITOR High resolution, 178mm (7”) diagonal LCD
PC COMMUNICATION PORTS USB 2.0 device (micro-B), Ethernet (10/100/1000Base-T)
CONTROL SOFTWARE FasMotion (PC/Mac application), web interface (browser on all platforms)
SIX EXTERNAL I/O PORTS Markers, Trigger In/Out, Sync In/Out, Arm In/Out (LVTTL (3.3V) or switch closure)
MARKER DATA VIEWS Camera display info line, playback timeline, FasMotion o-scope mode, XML file
VIDEO OUT HDMI (1080p60, 1080p30, 720p, 480p)
CONSTRUCTION Anodized machined aluminum housing
POWER Rechargeable Internal Li-ion battery, or 10-26 VDC external
POWER CONSUMPTION 42W maximum
OPERATING ENVIRONMENT +5°C to +40°C
SIZE & WEIGHT 228mm (9.0”) W x 114mm (4.5”) H x 89mm (3.5”) D. 1.8 kg (3.9 lbs.)
Optional Features
WiFi 802.11 b/g/n, security: open, WEP, WPA(2) - PSK
BUILT-IN STORAGE Solid State Drive (SSD): 256GB, 512GB, or 1TB capacity

TS5 Record Tables

 

TS5/IL5 Video Gallery

 

Please take a few minutes to fill out our form for more information.
Need information immediately?  Call us at +1 858-592-2342

If you have any questions or need more information,
please fill out the form below.

Glossary

Here are some definitions of terms you may find useful in your search for the right high-speed video solution.

• Frame Rate
• Sensor Dimensions
• Exposure
• Depth of Field
• Sensitivity
• Record Time
• Resolution
• Record Modes
• Time Magnification
• Lighting Techniques
• Lighting Sources
• Type of Lighting
• Tungsten
• Carbon Arcs
• Gas Discharge
• Arc Discharge
• Color
• Color versus Monochrome

Frame Rate
Frame rate, sample rate, capture rate, record rate, and camera speed are interchangeable terms.  Measured in frames per second (often abbreviated “fps”), a camera’s speed is one of the most important considerations in high-speed imaging.  The frame rate to use when making a recording should be determined after considering the speed of the subject, the size of the area under study, the number of images needed to obtain all the event’s essential information, and the frame rates available from the particular high-speed camera.  For example, at 1,000 fps a picture is taken once every millisecond.  If an event takes place in 15 milliseconds, the camera will capture 15 frames of that event.  If the frame rate is set too low, the camera will not capture enough images.  If the frame rate is set higher than necessary, the camera’s onboard storage may not be able to hold all the necessary frames.
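
This arithmetic is easy to sanity-check before a shoot.  The short Python sketch below uses hypothetical numbers (a 1,000 fps recording of a 15 ms event and a 2,000-frame memory budget), not the figures of any particular camera:

# Frame-rate sanity check with assumed example values.
frame_rate_fps = 1000        # chosen recording rate (frames per second)
event_duration_s = 0.015     # the event lasts 15 milliseconds
memory_frames = 2000         # frames the onboard memory can hold (assumed)

frames_captured = frame_rate_fps * event_duration_s
print(f"Frames of the event captured: {frames_captured:.0f}")            # 15

max_record_time_s = memory_frames / frame_rate_fps
print(f"Record time available at this rate: {max_record_time_s:.1f} s")  # 2.0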

In most high-speed cameras, higher frame rates result in lower resolutions, thus reducing the area of coverage.  This happens when a camera’s frame rate is set higher than its ability to provide full-frame coverage.  At the higher record rates, the height and/or width of the image is reduced, and in return the frame rate can be increased to ten or fifteen times the camera’s full-frame recording rate.  When considering the frame rate performance of a high-speed camera, be specific about your requirements, and look closely at a manufacturer’s specification sheet to see what the true resolution is at any given frame rate.
Return to top>

Sensor Dimensions
The size of the image sensor in a camera is important to know.  Some common sensor sizes include 1/2 inch, 2/3 inch, and 1 inch.  The 1-inch sensor has an effective width of 12.8 millimeters or larger, while the 2/3-inch sensor has an effective width of 8.8 millimeters.  A lens that works properly on a camera having a small sensor may not produce a large enough image to work correctly on a camera having a large sensor.  This is due to the distortion in the fringe areas of the lens.  Knowing the width of a sensor also helps prevent image blur, because users can calculate parameters such as the correct exposure time.  The sensor’s width also allows users to calculate the depth of field for a given aperture.

Exposure
Many factors influence the amount of light required to produce the best image possible.  Without sufficient light, the image may be:

— under-exposed, so detail is lost in dark areas
— unbalanced, with poor color reproduction
— blurred, due to the lack of depth of field


The time that the imaging sensor is exposed to light depends on several factors.  These factors include the lens f-stop, frame rate, shutter speed, light levels, reflectance of surrounding material, the imaging sensor’s well capacity, and the sensor’s signal-to-noise ratio (SNR).  All of these factors can significantly impact the image quality.  An often-overlooked factor is the exposure time, also known as the shutter speed.

The exposure time, shutter speed, and shutter angle are interchangeable terms.  The exposure time for a mechanical shutter is set in terms of the number of degrees that the shutter is open.  The exposure time for electronic sensors is either the inverse of the frame rate, if no electronic shutter exists, or the time in microseconds that an electronically shuttered sensor is exposed.  Shown below are the relationships for defining the exposure time:

mechanical shutter = (shutter angle / 360) / revolutions per second
no shutter = 1 / frame rate
electronic shutter = period of time that the sensor is “integrating”
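
The same relationships can be worked through numerically.  The Python sketch below uses assumed example values (a 500 fps camera and a 180-degree mechanical shutter), not the specifications of any particular camera:

# Exposure time from a rotary (mechanical) shutter and from an unshuttered sensor.
frame_rate_fps = 500          # shutter revolutions per second (assumed)
shutter_angle_deg = 180       # a common "half-open" shutter angle

exposure_mechanical_s = (shutter_angle_deg / 360) / frame_rate_fps
print(f"Mechanical shutter exposure: {exposure_mechanical_s * 1e6:.0f} microseconds")  # 1000

exposure_no_shutter_s = 1 / frame_rate_fps   # no shutter: exposure equals the frame period
print(f"No-shutter exposure: {exposure_no_shutter_s * 1e6:.0f} microseconds")           # 2000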

Return to top>

The exposure time determines how sharp or blur free an image is—regardless of the frame rate.  The exposure time needed to avoid blur depends on the subject’s velocity and direction, the amount of lens magnification, the shutter speed or frame rate (whichever is faster) and the resolution of the imaging system.

A high velocity subject may be blurred in an image if the velocity is too high during the integration of light on the sensor.  If a sharp edge of an object is imaged, and the object moves within one frame more than 2 pixels or a line pair, the object may be blurred. This is due to the fact that multiple pixels are imaging an averaged value of the edge.  This creates a smear or blur effect on the edge.  To get good picture quality, the shutter speed should be 10x that of the subject’s velocity.

The lens magnification can influence the relative velocity of the subject being imaged.  The apparent velocity of an object moving across a magnified field-of-view (FOV) increases linearly with the magnification level.  Intuitively, if an object is viewed from far away, its relative velocity across the FOV is less than when the object is viewed up close.
Return to top>

A proper shutter speed may be calculated as follows:

Exposure (shutter time) = 2 x Pixel Size / Vr

Where:

Vr = object’s velocity x (sensor dimension / field-of-view)
Pixel Size = sensor dimension / total pixels


Note: the sensor dimension and the total pixel count should both be taken in the same dimension (x or y).

 

If the object’s velocity, the field-of-view, the imaging sensor’s dimensions, and the pixel count are known, the shutter speed required to produce a sharp image can be calculated.  The relative velocity (Vr) at the sensor can be calculated by reducing the subject’s velocity by the optical reduction at the sensor (the ratio of the sensor dimension to the field-of-view).  The pixel size is calculated by dividing the sensor size in the dimension of interest (x or y) by the total number of pixels in that dimension.  Knowing that image motion of less than 2 pixels (one line pair) at the sensor plane during the exposure will produce a good image, we multiply the pixel size by two.  Therefore, the maximum exposure time is calculated by dividing 2x the pixel size by the relative velocity (Vr).  Its inverse yields the minimum shutter speed or, in the case of an imaging system without a shutter, the minimum frame rate for sharp images.
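
As a worked example of this calculation (all values below are hypothetical and only illustrate the formula), consider a 10 m/s object crossing a 0.5 m wide field-of-view imaged onto a 12.8 mm wide sensor with 1280 horizontal pixels:

# Minimum shutter speed for a sharp image, using assumed example values.
object_velocity_m_s = 10.0      # subject speed
field_of_view_m = 0.5           # width of the scene being imaged
sensor_width_m = 0.0128         # 12.8 mm sensor width
pixels_across = 1280            # horizontal pixel count

vr_m_s = object_velocity_m_s * (sensor_width_m / field_of_view_m)   # 0.256 m/s at the sensor
pixel_size_m = sensor_width_m / pixels_across                       # 10 micron pixels

max_exposure_s = (2 * pixel_size_m) / vr_m_s    # keep motion under ~2 pixels per exposure
print(f"Maximum exposure for a sharp image: {max_exposure_s * 1e6:.0f} microseconds")  # ~78
print(f"Equivalent minimum shutter speed: 1/{1 / max_exposure_s:.0f} s")               # ~1/12800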

Depth of Field
Depth-of-field (DOF) is the range in which an object would be in focus within a scene.  The largest DOF is achieved when a lens is set to infinity.  The smaller the f-stop the smaller the DOF.  If the object is moved closer to the lens, the DOF also decreases.  Lenses of different focal lengths will not have the same DOF for a given f-stop.
Return to top>

Sensitivity
Most high-speed image sensors have a sensitivity that is equivalent to a film Exposure Index value of between 125 ISO and 480 ISO in color and up to 3200 ISO in monochrome.  The sensitivity is a very important factor for obtaining clear images.  An inexperienced user may confuse motion blur with a poor depth-of-field.  If the sensitivity of the camera is not high enough for imaging an object for a given scene, the lens aperture must be opened up.  This reduces the depth-of-field for the object to remain in focus.  As the object moves, it could take a path outside the area that is in focus.  This would then give the appearance of an object with motion blur.  However, in reality, it is out of focus.

In practice, a single 600-watt incandescent lamp placed four feet from a typical subject provides sufficient illumination to make recordings at 1,000 fps with an exposure of one millisecond (1/1,000 of a second) at f/4.  This level of performance is fine for many applications, although some demanding high-speed events have characteristics where greater light sensitivity may be preferred.
Return to top>

Record Time
The recording time of a high-speed video camera is dependent on the frame rate selected and the amount of storage medium available.  The continuing technological advances in DRAM technology make higher storage levels affordable, but DRAM is still a limiting factor if more than approximately 10 seconds of full frame recording at high speeds is required.  However, most high-speed events occur in such short duration that 2000 frames is usually more than enough to capture an event.  As memory chips get denser, the storage capacity will continue to increase in high-speed cameras.
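
A rough record-time estimate simply divides the available memory by the data produced per frame.  The Python sketch below uses assumed values (8 GB of memory, 1080p resolution, 12-bit raw data packed at 1.5 bytes per pixel) and ignores any overhead a real camera may add:

# Approximate record time from memory size, resolution, and frame rate (assumed values).
memory_bytes = 8 * 1024**3      # 8 GB of image memory
width, height = 1920, 1080      # recording resolution
bytes_per_pixel = 1.5           # 12-bit raw data packed into 1.5 bytes per pixel
frame_rate_fps = 1000

bytes_per_frame = width * height * bytes_per_pixel
frames_stored = memory_bytes / bytes_per_frame
record_time_s = frames_stored / frame_rate_fps
print(f"Approximate frames stored: {frames_stored:.0f}")     # ~2762
print(f"Approximate record time: {record_time_s:.1f} s")     # ~2.8 s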
Return to top>

Resolution
Resolution of a high-speed camera is generally expressed in terms of the number of pixels in the horizontal and vertical dimensions.  A pixel is defined as the smallest unit of a picture that can be individually addressed and read.  At present, high-speed camera resolutions range from 128 x 128 to approximately 2500 x 1600 pixels.
A rule of thumb for capturing high-speed events is that the smallest object or displacement to be detected by the camera should not be less than 2 pixels within the camera’s horizontal field of view.

The sensor resolution may also be expressed in terms of line pairs per millimeter (lp/mm), which states how many transitions from black to white (line pairs) can be resolved in one millimeter.  To calculate a sensor’s theoretical limiting resolution in lp/mm, take the inverse of two times the pixel size.  Shown below is the limiting resolution of a sensor with a 16 micron pixel.

Theoretical limiting resolution = (1 / (2 x pixel size in microns)) x 1000 = (1 / (2 x 16)) x 1000 = 31.25 lp/mm

Record Modes
High-speed cameras currently use two principal types of recording medium, DRAM memory and solid state storage media.  Most cameras use solid-state DRAM memory and the most useful recording mode with this memory is called continuous record.  In continuous record mode the camera records non-stop, replacing its oldest images with the newest images until an event occurs and triggers the camera to stop.  Further flexibility allows the operator to program exactly how many images before and after an event are saved.  For engineers and technicians trying to record something unpredictable or intermittent, the continuous-record with triggering is the only feasible method of capturing the event.
Return to top>

One of the most powerful, but least understood and hence least used, features of many high-speed cameras is Record-On-Command (ROC).  ROC is powerful because images are stored only while a user-supplied signal is present.  Consider an example application: the objective is to capture over a thousand images of a box lid being closed.  There is an intermittent error that causes the lid to be damaged during the closing process.  An intermittent problem such as this is difficult to trigger on, since the damage may only be discovered further down the packaging line.  By using a tachometer pulse off the shaft driving the closing mechanism, precise timing can be derived indicating the exact position at which the lid is being closed.  This timing pulse is used to qualify images for storage in memory.  While the pulse is present, images are written into the high-speed camera’s memory; in the absence of the pulse, no images are recorded.  Therefore, only images of the lid in the exact position of interest will be recorded, and the recording continues until memory is full.  In addition, a range of motion may be recorded if the pulse is longer than a single frame period.  In other words, if the high-speed camera is operating at 1000 fps and the pulse into ROC is 5.5 milliseconds long, 5 images per pulse will be stored.  The use of this recording technique is limited only by the user’s imagination.
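
The gating logic itself is simple.  This Python sketch only illustrates the idea described above (real cameras implement ROC in firmware, driven by a hardware input, not by user code):

# Illustrative Record-On-Command gating: keep a frame only while the signal is asserted.
def record_on_command(frames, roc_signal, memory_capacity):
    memory = []
    for frame, pulse_present in zip(frames, roc_signal):
        if pulse_present:                      # tachometer pulse high: qualify this frame
            memory.append(frame)
        if len(memory) >= memory_capacity:     # recording stops when memory is full
            break
    return memory

# At 1000 fps, a 5.5 ms pulse spans about 5 frame periods, so about 5 frames per pulse are kept.
frames = list(range(20))                       # stand-ins for captured images
pulses = [i % 10 < 5 for i in range(20)]       # signal high for 5 of every 10 frame periods
print(len(record_on_command(frames, pulses, memory_capacity=1000)))   # 10 frames kept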
Return to top>

Another less common recording technique for high-speed cameras with DRAM memory is Slip Sync.  This recording technique is used to operate the high-speed camera at a frame rate that is defined by an external input signal.  Again, we will look at the application above to explain the operation.  Slip-sync imaging is very similar to imaging with a strobe synchronized to an object that has a repetitious movement.  In our example, the user would input a frequency that is synchronized to the tachometer.  As the frequency is varied, the captured images slip in or out of sync with the tachometer in a positive or negative direction.  This allows any position of the lid movement to be observed and captured.  Another example would be that of an accelerometer voltage that is fed to a voltage-to-frequency converter.  As the acceleration changes, so does the frequency out of the converter, and this frequency then drives the frame rate of the high-speed camera.  Why should this interest us?  Objects that move faster need a higher frame rate for recording than objects that move slower.  Therefore, the rate of change is directly proportional to the rate of recording.  Application examples include a crush test for materials using a strain gauge, a flame propagation study in a combustion engine using a pressure sensor, an automotive crash test using an accelerometer, or an explosion with a light sensor detecting the detonation.  This mode of recording is uniquely possible with DRAM-based high-speed cameras.
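
The accelerometer example amounts to a simple mapping from a sensor voltage to a recording rate.  The sketch below assumes a linear voltage-to-frequency converter (the 200 Hz-per-volt scale factor is purely illustrative):

# Slip-sync idea: an external signal sets the camera's frame rate (assumed linear V-to-f converter).
def frame_rate_from_voltage(voltage_v, hz_per_volt=200.0):
    return voltage_v * hz_per_volt             # higher voltage -> higher frame rate

for accel_voltage in (1.0, 2.5, 5.0):          # accelerometer output rising during the event
    fps = frame_rate_from_voltage(accel_voltage)
    print(f"{accel_voltage:.1f} V drives the camera at {fps:.0f} fps")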
Return to top>

Time Magnification
The goal in using a high-speed camera is to obtain a series of pictures that are observable in slow motion after capturing the pictures of a high-speed event. Time magnification describes the degree of “slowing down” of motion that occurs during the playback of an event. To determine the amount of time magnification, divide the recording rate by the replay rate. For example, a recording made at 1,000 fps and replayed at 30 fps will show a time magnification of about 33:1. One second of real time will last for about 33 seconds on the television or computer monitor. If the same recording were replayed at only 1 fps, that one-second event would take more than 16 minutes to play back! Most systems allow replay in forward or reverse with variable playback speeds. Therefore, it is important to capture only the information that is necessary; otherwise, long recordings can take hours to play back.
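
In Python, the arithmetic looks like this (example numbers taken from the paragraph above):

# Time magnification and playback duration for a recorded event.
record_rate_fps = 1000
replay_rate_fps = 30
event_duration_s = 1.0

time_magnification = record_rate_fps / replay_rate_fps
playback_time_s = event_duration_s * time_magnification
print(f"Time magnification: {time_magnification:.0f}:1")                       # ~33:1
print(f"One second of real time plays back in about {playback_time_s:.0f} s")  # ~33 s
print(f"At 1 fps the same second takes {record_rate_fps / 60:.1f} minutes")    # ~16.7 min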
Return to top>

Lighting Techniques

Lighting an application properly can produce significantly better results than poor light management. There are four fundamental directions for lighting high-speed video subjects: front, side, fill and backlight. Placing a light behind or adjacent to the lens is the most common method of illuminating a subject. However, some fill lighting or side lighting may be needed to eliminate the shadows produced by the front lighting. It is advisable to have the light behind the lens to avoid specular reflections off the lens. Side lighting is the next most common lighting technique. As the name implies, the light is at an angle from the side. This can produce a very pleasing illumination. In fact, for low-contrast subjects, a low incident lighting angle from the side can enhance detail. Fill lighting may be used to remove shadows or other dark areas. Fill lighting may also be used to lessen the flicker from lamps that have poor uniformity. Fill comes from the side or top of a scene. Backlighting may be used to illuminate a translucent subject from behind. It is not used that frequently in high-speed video; however, certain applications such as microscopy, web analysis or flow visualization are well suited for backlighting. Knowing all of these techniques, and using them when appropriate, is important for getting high-quality images.
Return to top>

Lighting Sources
There are a number of lighting sources available for high-speed video. Some care must be taken in lighting selection due to several factors. The factors that need to be considered include the type of light, the uniformity of the light source, the intensity of the light, the color temperature, the amount of flicker, the size of the light, the beam focus and the handling requirements. All of these factors are important in matching the light to the application.

Type of Lighting
Lighting types can be identified by two characteristics: physical design and the method of producing the light. The physical characteristics include the lens, the reflector, the packaging and the bulb design. The methods of producing light include tungsten, carbon arc, fluorescent and HMI.

Tungsten
Tungsten lighting is also referred to as incandescent lighting. Tungsten color temperature is 3200K. A common type of tungsten lamp is called a halogen lamp. Halogen is a hot light source since the bulb must heat the regenerative tungsten. Tungsten lamps are efficient in their light output but care needs to be taken when using them due to the high heat of the lamps and housings.

Carbon Arcs
This type of lamp forms an arc between two carbon electrodes. The arc produces a gas that fuels a bright flame that burns from one electrode to the other. This type of lighting is expensive and rarely if ever used in high-speed photography.

Gas Discharge
The fluorescent tube is one type of gas discharge lamp. At the end of each tube are electrodes, and the tube is normally filled with argon and some mercury. As current is applied at the electrodes, the argon gas vaporizes the mercury. The mercury emits ultraviolet light, which then strikes the side of the tube that is coated with a phosphor. The phosphor then transforms the ultraviolet into visible light. Most fluorescent lamps emit a dominant green hue which is not very suitable for a balanced light source. Additionally, the discharge produces a non-uniform light that is easily detected as a 60-cycle flicker when playing images back from a high-speed camera.

Arc Discharge
HMI (mercury medium-arc iodide) is the most common lamp in this class of lighting. As current is passed through the HMI electrodes, an arc is generated and the gas in the lamp is excited to a light emitting state. The spectrum of light emitted includes visible as well as ultraviolet. This light source typically has a UV filter to block the harmful emissions. The HMI light is a balanced light source which generates an intense white light. If a switching ballast is used with the HMI, it produces a uniform light with very low flicker. Other types of ballast are not as well regulated and not as useful for high-speed photography.
Return to top>

Color
Understanding color is difficult but necessary, even for monochrome imaging. The color of light is determined by its wavelength. The longer wavelengths are warmer in color (red), and the shorter wavelengths are cooler (blue). Color perception is a function of the human eye. The surface of an object either reflects or absorbs different light wavelengths. The light that the human eye perceives is unique in that it produces a physiological effect in our brain: what is red to one person may be perceived slightly differently by another person. Terms that further describe the color of an object are hue, saturation and brightness. Hue is the base color, such as red, blue, violet or yellow. Saturation describes the shades that vary from the pure base color. An example of a hue would be green, with lime (light green) as a variation in saturation. Brightness, also known as luminance, is the intensity of the light. The subject of color would take an entire book to fully explain, but studying a color chart can give the user some insight into the composition of a color scene.
Return to top>
Color temperature is a common way of describing a light source. Color temperature originally derived its meaning from the heating of a theoretical black body to a temperature that caused the body to give off varying colors ranging from red hot to white hot. Lord Kelvin developed this concept, and his name was associated with the unit of measure. Some high-speed cameras have color-balancing circuitry that allows the camera’s sensor to be set to the color temperature of the light being used.
Return to top>

Color versus Monochrome
Most of the early high-speed filming was done with black-and-white. Once color film became available, the use of black and white film declined. The use of high-speed color film set the format standard that video has attempted to meet. Over the years, monochrome images have been all that could be recorded on most high-speed cameras. Today’s high-speed cameras can produce images that replace color film for some high-speed applications. Full 24-bit color images are now possible from high-speed cameras. To understand the strengths and weaknesses of both color and monochrome in varying high-speed video applications, some background must be discussed.

There are various methods of producing color in high-speed video. The two most widely used techniques are beam splitters and color filter arrays. A beam splitter feeds three imaging sensors, one per primary color, so the primary colors and all saturations are possible (“true color”). This technique is costly since the electronic circuitry is tripled by the need for three imaging sensors, and the alignment of the three sensors must be very precise; otherwise, mis-registration will occur. The second and most common technique is a cost-saving compromise. Color Filter Arrays (CFA) are more cost-effective because they use only one imaging device. Individual color filters are deposited on the surface of each pixel, in some combination of red, green and blue or a complementary color scheme. Each pixel is thereby limited to a certain portion of the color spectrum, so the raw data must be interpolated to fill in the missing pixels in each color plane.

Now that the main methods for producing color have been discussed, we need to review why a user would choose between recording in color versus monochrome. Generally, monochrome images produce better image quality, and monochrome cameras are more sensitive because they don’t have a Color Filter Array attenuating the light. The resolving capability of a monochrome sensor is also better than that of CFA image sensors, because no interpolation is involved. The one disadvantage of a monochrome image is the loss of color differentiation: a subtle change in gray levels is harder to observe than a change in hue or saturation. Color is valuable for differentiating shades, which may yield useful information. Most high-speed photography is done with monochrome cameras for the reasons listed above.
Return to top>

Fastec manufactures portable, point and shoot digital video cameras for motion analysis in plant maintenance and field service troubleshooting, research, military test and instrumentation and sports training.
