Network Attached Video Cameras
Vivotek IP3111
Price is $659
MPEG-4 network camera. An easy-to-use, high-quality remote video surveillance system on a TCP/IP network. This self-contained network camera comprises an MPEG-4 compression engine, network servers, and a color camera. Optimal synchronization of audio and video for best effect. High-quality audio compression engine. Three ways to transmit video and audio with auto detection. Extension for an auto-iris lens. Video motion detection with 3 sensitive windows. Supports dial-in and dial-out via an external modem. Configurable maximum bandwidth. Automatic remote image retrieval and storage via e-mail and FTP with event triggering. IP3111: NTSC.

Vivotek IP3132
Price is $295
MPEG-4 network camera with audio. An easy-to-use, high-quality remote video surveillance system on a TCP/IP network. This self-contained network camera comprises an MPEG-4 compression engine, network servers, and a color camera. Optimal synchronization of audio and video for best effect. High-quality audio compression engine. Includes ST3201 viewing and recording software.

Sony SNCRZ30N 
Price is $1,615
Combining network functionality with pan/tilt/zoom (PTZ) capability, this M-JPEG camera takes remote monitoring and general IT applications to the next step by offering the flexibility to see almost anything within the camera's range and field of view over an ordinary TCP/IP network. By simply using a popular web browser such as Microsoft™ Internet Explorer, images and the PTZ movement of the SNC-RZ30N camera can be controlled from a PC at any location and at any time, without the need for any additional software or plug-ins. In addition, installation and operation of the SNC-RZ30N camera are easy thanks to its browser-based setup menu and user-friendly GUI (graphical user interface).

IQeye3L012Starter 
Price is $1,435
IQeye3 camera starter kit (one per customer) with 12 mm lens, IQpoet3, power supply, and Ethernet cables. This intelligent imaging system includes a 1.3-megapixel color imager capable of over 1.8 Mpixels/second JPEG performance and provides digital pan, tilt, and zoom. 1/2" 1288 x 968 CMOS digital imager; 1.3 fps at 1288 x 968; selectable windowing and subsampling; AGC or selectable gain; adjustable spot meter window; configurable color balance; variable JPEG compression ratio; text and graphic overlays. Ethernet: RJ45 10/100BASE-T.

 
What Are MPEG and JPEG Formats, and Which Is Better?
It all depends on your application
http://www.kintronics.com/neteye/neteye....

So what are these things called JPEG and MPEG? Video cameras use either MPEG or M-JPEG (Motion JPEG) compression to reduce the size of the video files that are sent to your computer. Why do we need compression? Because the raw video picture contains a lot of data, and if you plan to store the video, the amount of data per frame dramatically affects the size of the storage system you need. JPEG can provide about 20:1 compression; for example, a 2 MB image can be compressed to about 100 KB. Should we use MPEG or M-JPEG? Well, it depends on your application.
 
M-JPEG (Motion JPEG) is the compression of choice for video surveillance, as it prioritizes image quality over frame rate, has low latency (the delay between actual activity and the presented video), and degrades gracefully under bandwidth constraints and packet loss. M-JPEG guarantees a defined level of picture quality, which is vital in most security applications. Because M-JPEG is also a less complicated compression, the number of available third-party applications is much larger. In contrast, MPEG-4 is better suited to applications where full frame rate and synchronized sound are more important than image quality, latency, and resilience.

Let’s take a look at some real-world examples. Suppose you would like to monitor about 20 locations in a school, and you want to be able to store the video so you can review it at a later time. In this application you really don’t need real-time video and can probably use a frame rate of about one frame per second. You would like a clear picture, so you can see not only a picture of a person but also identify a face, and you would like to be able to view the video on a PC using a standard web browser. M-JPEG is better in this application because it will provide a clear picture, and the stored video can be easily and quickly retrieved.
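
For a rough sense of scale, here is a back-of-the-envelope calculation in Python. The 30 KB average compressed frame size is an assumption for illustration only; actual JPEG frame sizes depend on resolution and compression settings.

    # Storage estimate for the school example: 20 cameras recording
    # M-JPEG at one frame per second, around the clock.
    CAMERAS = 20
    FPS = 1                 # frames per second per camera
    FRAME_KB = 30           # assumed average compressed frame size

    frames_per_day = CAMERAS * FPS * 60 * 60 * 24
    gb_per_day = frames_per_day * FRAME_KB / 1024 / 1024

    print(f"{frames_per_day:,} frames/day, about {gb_per_day:.0f} GB/day")

At these assumed settings the system produces roughly 50 GB of video per day, which shows why the frame rate and compression choice drive the storage budget.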
 
Here’s another example. Suppose you have a remote store and you would like to monitor activity, seeing and hearing what’s going on at that remote location. In this case MPEG would be the better choice, since it provides synchronized video and audio and achieves better compression at higher frame rates. It will also work better over a slower telephone connection.
 
Here are some details about M-JPEG and MPEG.

M-JPEG
 
M-JPEG is a video format that applies JPEG compression to each frame of video. Video is made up of a stream of pictures, or frames; a standard TV displays 30 frames per second to create a smooth motion picture.
 
JPEG (pronounced "jay-peg") is a standardized image compression mechanism. JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard. JPEG is designed for compressing full-color or gray-scale images of natural, real-world scenes. It works well on photographs, artwork, and similar material; not so well on lettering, simple cartoons, or line drawings.
 
JPEG is "lossy," meaning that the decompressed image isn't quite the same as the one you started with. (There are lossless image compression algorithms, but JPEG achieves much greater compression than is possible with lossless methods.) JPEG is designed to exploit known limitations of the human eye, notably the fact that small color changes are perceived less accurately than small changes in brightness. Thus, JPEG is intended for compressing images that will be looked at by humans. If you plan to machine-analyze your images, the small errors introduced by JPEG may be a problem for you, even if they are invisible to the eye. A useful property of JPEG is that adjusting compression parameters can vary the degree of lossiness. This means that the image-maker can trade off file size against output image quality. You can make *extremely* small files if you don't mind poor quality; this is useful for applications such as indexing image archives. Conversely, if you aren't happy with the output quality at the default compression setting, you can jack up the quality until you are satisfied, and accept lesser compression.
 
How JPEG Works:
JPEG divides the image into 8 by 8 pixel blocks, then calculates the discrete cosine transform (DCT) of each block. A quantizer rounds off the DCT coefficients according to the quantization matrix; this step produces the "lossy" nature of JPEG, but allows for large compression ratios. JPEG's compression technique then applies a variable-length code to these coefficients and writes the compressed data stream to an output file (*.jpg). For decompression, JPEG recovers the quantized DCT coefficients from the compressed data stream, takes the inverse transform, and displays the image. Figure 1 illustrates this process.
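
To make these steps concrete, here is a minimal Python sketch of the transform-and-quantize pipeline for a single 8 by 8 block, using the standard JPEG luminance quantization matrix. The entropy (variable-length) coding stage is omitted, so treat this as an illustration of the technique, not a conforming JPEG codec.

    import numpy as np
    from scipy.fft import dctn, idctn

    # Standard JPEG luminance quantization matrix.
    Q = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 pixel block

    coeffs = dctn(block - 128, norm="ortho")      # level shift, then 2-D DCT
    quantized = np.round(coeffs / Q)              # the lossy rounding step
    # Decoder side: dequantize, inverse DCT, undo the level shift.
    restored = idctn(quantized * Q, norm="ortho") + 128

    print("max pixel error:", np.abs(block - restored).max())

The rounding in the quantize step is where information is permanently discarded; larger entries in Q throw away more detail in the high-frequency coefficients, giving smaller files at lower quality.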


 
For more information, see the JPEG FAQ at http://www.faqs.org/faqs/jpeg-faq.
 
 
MPEG
 
The Moving Picture Experts Group (MPEG) is the organization that defined the MPEG standards. Their first work, titled ‘Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s’, formed the basis of what is now known as the MPEG-1 standard. After the success of MPEG-1, the group has worked to produce ever better and more efficient versions of it.
 
Background:
To understand the motivation behind all this work, consider the data requirements of NTSC video. The NTSC standard is used in the USA to send video signals to our TVs. Digitized at 352 by 240 pixels (the SIF resolution used by MPEG-1), 30 frames/sec, and 24-bit pixel depth, it requires more than 60 Mbps of bandwidth without any compression, which, by any standard, is enormous. This is fine for our analog TV sets, but not practical for sending digital data over a telephone line or storing the video on our computers. A more efficient approach is to compress the data so that it can be transported at a much lower bandwidth, yet still be broadcast in real time.
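
The arithmetic behind that figure is easy to verify; the roughly 40:1 ratio against MPEG-1's 1.5 Mbps (discussed next) falls out as well.

    # Raw bandwidth of uncompressed 352 x 240, 24-bit, 30 fps video.
    width, height = 352, 240
    bits_per_pixel = 24
    fps = 30

    raw_bps = width * height * bits_per_pixel * fps
    print(f"Raw: {raw_bps / 1e6:.1f} Mbps")                    # 60.8 Mbps
    print(f"Vs MPEG-1 at 1.5 Mbps: {raw_bps / 1.5e6:.0f}:1")   # about 41:1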
 
The MPEG-1 standard uses a mere 1.5 Mbps of bandwidth to broadcast live audio/video. It can be used on CD-ROMs to create Video CDs. MPEG-1 Audio Layer 3 has been the most widely adopted part of the standard and is today more commonly known as MP3.
 
The MPEG standards group has continued to develop better compression schemes that provide higher quality and higher bit rates. The MPEG-2 standard was released in 1994. It is similar to MPEG-1, but it supports higher bit rates and thus higher (read: broadcast) quality video. It is used in DVDs and digital television broadcasts. MPEG-3 was intended to cover HDTV, which would require bit rates on the order of 20-40 Mbps. It was later discovered, however, that MPEG-2 could be tweaked to fulfill this HDTV objective, so work on the MPEG-3 standard was abandoned.
 
The next big standard, MPEG-4, also called ‘Coding of Audio-visual Objects’, was standardized in 1998. It differs from the earlier versions of MPEG in that it enables coding of individual objects. It is no longer necessary to think of an image as a series of rectangular blocks; the blocks can be of any arbitrary shape, so each block can represent an individual real-life object, like a person or a ball, which couldn’t be accurately described by a rectangle. This makes recording changes to a particular object much simpler, since we are restricted to the object itself rather than its surroundings, as would be the case with rectangular blocks. One of the major aims of the MPEG-4 standard is to deliver high-quality digital content using as little bandwidth as possible.
 
As the amount of digital audio/video grows, so do the difficulties of archiving, searching, and retrieving the required information. Searching audio or video is not as easy as searching text. MPEG-7 was conceived in 1997 to address this problem. Formally called ‘Multimedia Content Description Interface’, it does not describe any new coding or compression techniques; instead it defines a standard way to store information about digital content and make it searchable. MPEG-7 can be thought of as a way to store meta-information, i.e., information about information. It is designed to complement MPEG-4 and its predecessors, not replace them. Work on this standard is still continuing, and it will be some time before we see products implementing it.
 
The big picture
Work on MPEG-21, or ‘Multimedia Framework’, was started in 2000 to define a big picture of the whole multimedia environment. It aims to describe a multimedia framework where interoperability is key: the consumer can use the content without worrying about media formats, CODECs, and the like. It is a very ambitious attempt and divides the multimedia world into four categories: ‘Users’ (anybody on the network) accessing ‘Digital Items’ (the content itself) and executing on them ‘Actions’ that generate other digital items as part of a ‘Transaction’. MPEG-21 also aims to address content protection and licensing by implementing techniques that uniquely identify any digital content globally. Work on this standard is still in its infancy, and it will be quite a while before it fulfills its exciting promises.
 
How MPEG works
The compression technique used in MPEG-1 compresses each frame and then compresses adjacent frames by noting just the change in video from frame to frame.
 
Frame compression takes into account that the human eye is not sensitive to certain changes in color. Studies have shown that the human eye is more sensitive to changes in luminance (Y) than in the chrominance (CrCb) components. Compression is achieved by discarding some of the information stored in the CrCb components. This is called "down sampling" the data and is carried out by averaging the pixel values in the chrominance components so that a single value is shared by multiple pixels.
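
As a sketch of what down sampling means in code, here is a 4:2:0-style reduction in Python, where one chrominance value is shared by each 2 by 2 block of pixels. The exact sampling pattern varies between formats, so this is an illustration rather than any particular standard's scheme.

    import numpy as np

    def downsample_chroma(plane):
        """Average each 2x2 block of a chrominance plane into one value."""
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    ycrcb = np.random.randint(0, 256, (480, 640, 3)).astype(float)
    y = ycrcb[:, :, 0]                       # luminance kept at full resolution
    cr = downsample_chroma(ycrcb[:, :, 1])   # chrominance: one value per 2x2 block
    cb = downsample_chroma(ycrcb[:, :, 2])

    print(y.shape, cr.shape, cb.shape)       # (480, 640) (240, 320) (240, 320)

Each chrominance plane shrinks to a quarter of its original size, cutting the total data in half before any further compression is applied.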
 
Frame-to-frame coding techniques are based on the knowledge that most frames are similar to the ones preceding and succeeding them. This means that most frames can be transmitted as differences from their neighbors, which in turn means that much less information has to be transferred.
 
The first frame is (obviously) transferred as it is. This type of frame is self-contained and is called an I (Intra) frame. Subsequent frames can be another I-frame (with no relation to the preceding frame, in case the changes are too many and starting afresh would be better), a P (Predicted) frame, which depends on the preceding frame, or a B (Bi-directional) frame, which depends on both the preceding and succeeding frames. Frames are divided into rectangular blocks, and the difference in each of these blocks is calculated and transmitted depending upon the type of the frame (I, P, or B).
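
Here is a toy Python illustration of the idea. Real MPEG divides frames into blocks and uses motion compensation, and B-frames are left out here, so this is only a sketch of plain difference coding between an I-frame and the P-frames that follow it.

    import numpy as np

    def encode(frames):
        yield ("I", frames[0])              # first frame sent as-is (Intra)
        for prev, cur in zip(frames, frames[1:]):
            yield ("P", cur - prev)         # later frames: only the change

    def decode(stream):
        frames = []
        for kind, data in stream:
            frames.append(data if kind == "I" else frames[-1] + data)
        return frames

    frames = [np.random.randint(0, 256, (4, 4)) for _ in range(3)]
    assert all((a == b).all() for a, b in zip(frames, decode(list(encode(frames)))))

When consecutive frames are similar, the difference arrays are mostly zeros and compress far better than the frames themselves.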
 
In video surveillance applications that require storing the video, MPEG is not as easy to work with as M-JPEG. To find a particular frame in MPEG, you must first find the preceding “I” frame and then move forward to the exact moment you want. With M-JPEG video, you can jump to the exact frame you want quickly and easily.
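
Continuing the toy sketch above, the retrieval difference looks roughly like this; the list of (kind, data) pairs is the same illustrative assumption as before, not an actual file layout.

    def seek_mjpeg(frames, n):
        return frames[n]          # every frame stands alone: direct lookup

    def seek_mpeg(stream, n):
        # Back up to the nearest preceding self-contained I-frame...
        start = max(i for i in range(n + 1) if stream[i][0] == "I")
        frame = stream[start][1]
        # ...then replay every difference forward up to frame n.
        for _, delta in stream[start + 1 : n + 1]:
            frame = frame + delta
        return frame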
 
If you would like more information about this and the products available, please don’t hesitate to contact us at 1-800-431-1658 or 914-347-2530, or send us an e-mail at news@kintronics.com.
 

Published by Bob
Copyright © 2003 Kintronics, Inc. All rights reserved.
For more information, please contact us 1-800-431-1658 or 914-347-2530 (outside the USA) or by email news@kintronics.com