Introduction

What is the Kairos Human Analytics SDK?

LATEST VERSION 1.1.0 UPDATED 11/15/2016

The Kairos Human Analytics SDK allows you to detect and use data from human faces in your products. It uses best in class face analysis algorithms and machine learning to deliver fast and accurate real-time results.

The SDK supports a range of files, images, and streams:

  • Analyze most images and videos
  • Popular file types and formats are supported
  • Streams from most USB and IP cameras are supported

Learn more about Human Analytics - See all of Kairos' face recognition features.

Why use it?

Human Analytics is your foundation for building intelligent products that return actionable insights about people. It provides meaningful data about faces in video, images and the real world. This can inform business decisions that go far beyond traditional data analysis.

For example

You could measure and respond to the emotions and demographics of people experiencing your digital advertising. Alternatively, you could search through millions of faces to find a match. All this, and more, is possible with the Kairos Human Analytics SDK.

How does it work?

First, we scan the image or frame of video looking for all the faces. Next we look for the feature points, like the location of the eyes, eyebrows, nose and mouth, on each face.

Once we locate these feature points we are then able to determine things like emotional expressions, age, gender, and if people are wearing glasses or blinking. That information is then provided back in the form of various objects in the SDK.

Across the following frames of a video, we keep track of the location of each face and identify the feature points. This means we can still measure faces even if parts of them are covered; the technical term is 'occlusion'. So, if someone scratches their nose, for example, it won't affect your results. All this keeps our algorithms fast and their resource consumption low.

What is this Cara thing?

Cara is the Spanish word for “face”. The face is the key to our emotions so we thought it was neat to use “cara” as the namespace for the Kairos Human Analytics SDK. When you see references to “cara” in object, method and class names it’s our nod to the humanity of the face.

 

 

Getting a License Key

To use the Kairos Human Analytics SDK you will need an active license key in your project.

Trial Licenses

Trial licenses are available by request as part of our ‘Business’ and ‘Enterprise’ plans. All we ask you for is your name, email and a description of your business needs - Learn more about our pricing plans.

Production Licenses

Once you've built and finished testing your app, you are ready to distribute it. It's now time to get a production license.

Please email our customer team to get your production license.

License Expiration

The license key is an XML text file that contains your name, email address and the expiration date of the license. After the expiration date, the SDK will not operate and will throw a cara::LicenseExpiredException. This applies to both trial and production licenses. We typically generate production licenses with expiration dates of 12 months.

We encourage you to design your app so you can insert new license files with little effort. Your app will never contact Kairos and your license will never end before the expiration date.

License File

When you receive a license file, it should be installed in the <APP_PATH>\config directory on Windows and <APP_PATH>/config on Linux.

 

 

Requirements

Hardware Requirements

We designed the Kairos Human Analytics SDK to be ultra fast and lightweight. The minimum specification is an Intel dual-core Atom processor with 1GB of RAM, which provides about 15 ft (5 meters) of viewing distance. The recommended specification is an Intel Core i3 processor with 2GB of RAM, which can provide up to 30 ft (9 meters) of viewing distance.

The SDK does not use a lot of RAM or hard drive space. There are minimal graphics requirements, so any integrated graphics should be fine.

The SDK is certified for Windows 10 with Visual Studio 2012 and higher, and for Ubuntu 14.04 and 16.04 with gcc. This list will be expanded over time as more platforms are tested and certified.

Development Requirements

The Kairos Human Analytics SDK delivers DLLs (Dynamic Link Libraries), or shared libraries, in different versions. For Linux and Windows we provide libraries and executables for the x86_64 platform. For Linux there is a non-debug version only (there is no need for a debug version).

We provide debug and release versions for Microsoft Windows. This makes it easy to create debug mode executables. We also provide versions of the libraries built with Visual C++ 12.0 (aka Visual C++ 2013) and Visual C++ 11.0 (aka Visual C++ 2012). Those will also compile in the most recent versions of these compilers.

The Windows 64-bit libraries are dynamically linked to the C runtime import libraries provided by the compiler. When compiling your application's object files in Win64, you must state the type of C runtime library. The SDK import libraries can be found under the \lib subdirectory of the platform installation, in directories named after the compiler versions.

You'll see that debug versions of the libraries have a d at the end of their name. This helps differentiate them from release versions.

 

 

Installation

Download

You can request the latest version of the Kairos Human Analytics SDK for your platform from our website - Learn more about our pricing plans.

Install

Linux

To install on Linux, download the SDK package and untar it to the folder where you want to store the SDK. For example:

tar -xvf kairos-ha-sdk-linux_x64.tar.gz -C /opt/kairos/

You will need to install specific dev packages under Ubuntu 14.04 for your applications to run successfully, or you will get OpenCV and/or SDK errors. Please see the README file included with your tutorial app for more information.

Windows

After downloading the version for Windows, just unzip the archive to a folder of your choice.

Library Locations

When building applications with the Kairos Human Analytics SDK, you have to tell the system how to find the runtime libraries that make up the SDK. You must do this whether you are using Linux, Windows or any other platform.

Linux

All of the run-time libraries needed by your app are located in the bin directory of your installation. You must add the library location to LD_LIBRARY_PATH so the dynamic linker can find the SDK shared libraries. For example:

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/kairos/bin/cara/x64/gcc

This will update your library path to include the directory containing the Human Analytics SDK shared library (human_analytics_sdk.so). The /opt/kairos/ prefix represents the location where you installed the Human Analytics SDK after you downloaded it. You can put the shared libraries somewhere other than the default location, but you do have to keep their directory in your LD_LIBRARY_PATH.

Please note

The following libraries are also required:

  • Boost
  • OpenCV
  • LibXML
  • ZZIP

These libraries are shipped with your SDK and are located in the bin folder as well. You will need to export their paths too. Please see the README that came with the tutorial for more info about the full LD_LIBRARY_PATH you should export.

Please note

In addition to exporting your LD_LIBRARY_PATH, you will also need to specify the libraries mentioned above when compiling your app, or you will get unresolved reference errors when you compile or run your application. Please see the included tutorial README for the specific gcc command, but here is an example of including the libraries to link:

gcc -o emotions main.cpp -I/include -L/opt/kairos/bin/zzip -lzzip

-l is an option that you pass when compiling your app that tells the gcc compiler what libraries to link against. The full gcc command is included in the README file that came with your tutorial.

Windows

The run-time libraries are located in the bin directory of your installation. You will need to set the PATH environment variable for your run-time libs (DLLs) to be located by your app. If you do not do this, then you will need to copy the DLLs from the \bin\cara directory into your application base. This is OK for testing purposes, but it is recommended to set the PATH when deploying your app.

 

 

Concepts

This section describes the features you'll encounter when developing with the Kairos Human Analytics SDK.

Media Source

This refers to the type of media that you want to analyze. The Kairos Human Analytics SDK supports media from video files and still images. It also supports live streams from USB webcams and IP cameras.

Video files can be in the format AVI, FLV, MJPEG, MOV, MP4, MPEG, WEBM, or WMV.

Image files can be BMP, JPG, or PNG. We currently do not support GIF.

We support most types of USB webcams and IP based cameras. As there are thousands of types of these cameras we don't keep an up-to-date list. In general, cameras that support MJPEG and H.264 over HTTP, RTMP, and RTSP protocols should work fine.

If you have different format or encoding needs, please contact us.

Special note about still images

The Kairos Human Analytics SDK learns about the faces it is analyzing over time in a given piece of media. For example, the engine is able to understand someone who has a naturally smiling facial expression and normalize the results of that person versus another person who has a more neutral pose.

With images, because there is only one frame of data, there isn't an opportunity to perform the same calibration and adjustment, so the results of any one photo may not be representative of the person's true intentions; we wouldn't have time to learn whether that is his or her natural expression or a real reaction.

Frame

A frame is one image or photo, or one single frame of a video file or video stream. The Kairos Human Analytics SDK looks for faces in the media source you provide it and then analyzes each face for emotion, landmarks, demographics, appearance, and tracking.

Impressions

An impression contains data such as demographics, tracking, and other information about a person detected in a single image, pre-recorded video, or live video stream.

Here is an example of how to read impressions from the HumanAnalysisService class:

// How to access impression data for each detected person
People people = human_analysis_service->getPeople();
for(Person person : people)
{
    int glances = person.impression.glances;
    double dwell = person.impression.dwell;
    int attention = person.impression.emotion_response.attention;
    Age age = person.impression.age;
    Gender gender = person.impression.gender;
}

Please note

When more robust information about a person in a scene is required, such as the FeaturePoints or landmarks of the face, you can use the Person struct defined in CaraTypes.h, which has access to the face coordinates through the Face object and other objects such as HeadPose or FeaturePoints.

Emotion

The Kairos Human Analytics SDK classifies 6 core human emotions and will return:

  1. Joy
  2. Surprise
  3. Sadness
  4. Fear
  5. Anger
  6. Disgust

When the SDK detects these facial emotions it will give you a corresponding value that varies with the intensity of the emotion being measured. Each emotion classifier is measured on a scale of 0-100, with 0 being an absence of a particular emotion and 100 being its maximum intensity.
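
For illustration, here is a minimal sketch of reading emotion scores from an impression. It assumes a person obtained from getPeople() as in the Impressions example above; the emotion_response object appears there, but the individual field names used here (joy, surprise) are assumptions, so check CaraTypes.h for the actual member names.

// Minimal sketch (assumed field names) of reading per-emotion scores (0-100)
int joy      = person.impression.emotion_response.joy;       // assumed field name
int surprise = person.impression.emotion_response.surprise;  // assumed field name

if(joy > 75){
    cout << "strong joy detected (" << joy << "/100)" << endl;
}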

Demographics

Age and Gender detection is supported and is part of a person's Impression; the enumeration types are defined in the CaraTypes.h header.

Age is classified as an age group and will return the following enumeration types:

1. Age::CHILD (0-13 years old)
2. Age::YOUNG_ADULT (14-35 years old)
3. Age::ADULT (35-65 years old)
4. Age::SENIOR (65+ years old)

Gender is classified with two options and will return the following:

1. Gender::MALE
2. Gender::FEMALE
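
As a quick illustration, here is a minimal sketch that maps these enumeration values to printable labels. It assumes the Age and Gender enums from CaraTypes.h listed above and a person obtained from getPeople() as in the Impressions example; adapt it to your project as needed.

// Minimal sketch: convert the Age enum to a printable label
static std::string ageLabel(Age age){
    switch(age){
        case Age::CHILD:       return "child (0-13)";
        case Age::YOUNG_ADULT: return "young adult (14-35)";
        case Age::ADULT:       return "adult (35-65)";
        case Age::SENIOR:      return "senior (65+)";
        default:               return "unknown";
    }
}

// ...inside your frame processing loop
Age age = person.impression.age;
Gender gender = person.impression.gender;
cout << ageLabel(age) << ", "
     << (gender == Gender::MALE ? "male" : "female") << endl;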

Appearance

Appearance is another impression value.

For example, here is how to check whether a detected person is wearing glasses:

Glasses glasses = person.impression.glasses;

This will return one of:

1. Glasses::WITH_GLASSES
2. Glasses::WITHOUT_GLASSES
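
Here is a minimal sketch that reacts to the returned value, again assuming a person obtained from getPeople() as in the Impressions example above:

// Minimal sketch: branch on the Glasses appearance value
Glasses glasses = person.impression.glasses;
if(glasses == Glasses::WITH_GLASSES){
    cout << "person is wearing glasses" << endl;
}else{
    cout << "person is not wearing glasses" << endl;
}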

In the future we will be adding additional appearance detectors here. Please let us know if you have any special requests or needs by contacting us.

Tracking

Tracking is a component of an impression that revolves around the number of glances (times the person looked at the camera and looked away), dwell (how long they were in front of the camera), blink (if the person’s eyes were blinking), and most importantly attention (the percentage of time the person was looking at the camera).

For example, here is how to check whether a detected person is blinking:

Blink blink = person.impression.blinking;

This will return one of:

1. Blink::BLINK
2. Blink::NO_BLINK
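
Here is a minimal sketch that pulls these tracking values together for a person obtained from getPeople(); the field names follow the Impressions example above, but verify them against CaraTypes.h for your SDK version:

// Minimal sketch: print the tracking values for one person
int glances = person.impression.glances;
double dwell = person.impression.dwell;
int attention = person.impression.emotion_response.attention;
Blink blink = person.impression.blinking;

cout << "glances=" << glances
     << " dwell=" << dwell
     << " attention=" << attention << "%"
     << " blinking=" << (blink == Blink::BLINK ? "yes" : "no")
     << endl;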

Landmarks

We capture landmarks on a person’s face through a process called 'Feature Point Detection'. You can access these via the Person struct after the processing of frames in the HumanAnalysisService.

With the Kairos Human Analytics SDK you can track a total of 49 points, in real-time, on each detected face. You can detect up to 20 faces in parallel when the processor is in tracking mode. ‘Face Feature Extraction’ and ‘Classification’ (these determine emotions) use these landmarks. Note that the more faces you are tracking, the more CPU resources will be required to process and track the features, emotions, demographics, etc. of each face.

Here are the landmarks we detect:

    No. Description:  
    1. leftEyeBrowOuterLeft
    2. leftEyeBrowInnerLeft
    3. leftEyeBrowInnerRight
    4. leftEyeBrowOuterRight
    5. rightEyeBrowOuterLeft
    6. rightEyeBrowInnerLeft
    7. rightEyeBrowInnerRight
    8. rightEyeBrowOuterRight
    9. noseBetweenEyes
    10. noseBridge
    11. noseBody
    12. noseTipTop
    13. leftNostrilOuterLeft
    14. leftNostrilInnerRight
    15. noseTipBottom
    16. rightNostrilInnerLeft
    17. rightNostrilOuterRight
    18. leftEyeCornerLeft
    19. leftEyeTopInnerLeft
    20. leftEyeTopInnerRight
    21. leftEyeCornerRight
    22. leftEyeBottomInnerRight
    23. leftEyeBottomInnerLeft
    24. rightEyeCornerLeft
    25. rightEyeTopInnerLeft
    26. rightEyeTopInnerRight
    27. rightEyeCornerRight
    28. rightEyeBottomInnerRight
    29. rightEyeBottomInnerLeft
    30. upperLipTopCornerLeft
    31. upperLipTopOuterLeft
    32. upperLipTopInnerLeft
    33. upperLipTopCenter
    34. upperLipTopInnerRight
    35. upperLipTopOuterRight
    36. upperLipTopCornerRight
    37. lowerLipBottomCornerRight
    38. lowerLipBottomInnerRight
    39. lowerLipBottomCenter
    40. lowerLipBottomInnerLeft
    41. lowerLipBottomCornerLeft
    42. upperLipBottomOuterLeft
    43. upperLipBottomInnerLeft
    44. upperLipBottomCenter
    45. upperLipBottomInnerRight
    46. upperLipBottomOuterRight
    47. lowerLipTopInnerRight
    48. lowerLipTopCenter
    49. lowerLipTopInnerLeft
  

Creating your first app

To start, first make sure the HA SDK is installed and your license file, either trial or production, is placed in the “config” directory.

Configuring your development environment to use SDK

Windows

Add the header files to your project's include directories. The headers are located in the include/ subdirectory of the SDK.

Visual Studio 2010 and above:

  1. Under Project -> Properties -> Configuration Properties -> C/C++ -> General

    • In Additional Include Directories, add the full or relative path to the include folder, e.g. C:\Program Files (x86)\Kairos\Kairos HA SDK\include
    • Also add the full or relative path to the OpenCV include folder, e.g. C:\Program Files (x86)\Kairos\Kairos HA SDK\include\opencv
  2. Under Project -> Properties -> Configuration Properties -> Linker -> General

    • In Additional Library Directories, add the full or relative path to the library folder, e.g. C:\Program Files (x86)\Kairos\Kairos HA SDK\lib\x64\vc11
  3. Under Project -> Properties -> Configuration Properties -> Linker -> input

    • Under the input item, in Additional Dependencies, include the 2 SDK libraries (api.lib and human_analytics_sdk.lib) or the debug versions of the DLL, if doing a debug configuration
  4. Set your environment PATH variable to the location of the Human Analytics SDK DLL (located in bin/cara/x64/vc11)

  5. Set additional PATHs to all of the remaining folders (libxml, boost, opencv, zzip)

    • When setting the environment PATH variable, set the system variable, not the user PATH. Also make sure that you have administrator rights to change system settings; you can be logged into your machine as an administrator and still not have rights to change system settings.
  6. Compile your app using a 64-bit debug or release configuration

    • If you receive a system error, e.g. 0xc000007b, this is most likely due to your build configuration being set improperly. Your build configuration must be set to 64-bit (x64) in order to use our SDK, as it is 64-bit only and NOT backwards compatible with the 32-bit (x86) architecture
  7. Run it (either from the IDE or from the debug/release folders) with this command: emotions video somevideo.mp4

    • Make sure the models folder is relative to your app
    • Make sure your license has been included, either in the config folder or relative to the executable
    • Make sure to trap the CaraException to check when your license expires, if it is missing from your app, or if some other error occurs. Please see the section on handling exceptions below for more information.

Please note

The SDK was built using Visual Studio 2012; however, you can use any IDE that suits your needs. If you are having issues building or running your debug or release build, you may be missing some critical components from your development environment. To resolve this, try any of the following:

  1. Download and install the Windows SDK (formerly known as the Platform SDK) for the platform you are developing on, i.e. Windows 7, 8, 10, etc.
  2. Download and install the Visual Studio 2012 runtime redistributables (x64)

Linux

Please see the README file included within the Linux tutorial for more information.

Initialize SDK

To get started, you must set the location of where your SDK license is stored. Here’s an example of how to do this:

HumanAnalysisService *human_analysis_service = new HumanAnalysisService("license.xml", <APP_PATH>);

The SDK must be able to find a valid license in order to be used. If the SDK can't find the license in the SDK installation, then you will get a LicenseInvalidException or LicenseExpiredException, which means your license is corrupt or can't be located. Trapping one of these exceptions is recommended. If you wish, it is also possible to trap errors by catching a CaraException, which is a catch-all for other exceptions that might be thrown, including the two just mentioned.
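
For illustration, here is a minimal sketch of trapping these exceptions during initialization. The exception class names follow the ones mentioned above (they may need the cara:: namespace qualifier depending on your SDK version), and <APP_PATH> is a placeholder as in the other examples:

// Minimal sketch: construct the service and trap license problems
HumanAnalysisService *human_analysis_service = nullptr;
try{
    human_analysis_service = new HumanAnalysisService("license.xml", <APP_PATH>);
}catch(LicenseExpiredException& e){
    // license found but past its expiration date
    cout << "license expired: " << e.message() << endl;
}catch(LicenseInvalidException& e){
    // license corrupt or not located
    cout << "license invalid: " << e.message() << endl;
}catch(CaraException& e){
    // catch-all for any other SDK exception
    cout << e.message() << endl;
}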

The SDK is initialized internally with a set of defaults that optimize the SDK for immediate use. In the future, we may expose more settings as more features become available in later versions of the SDK.

Configuring the SDK

Once you get past the basic initialization, you can also issue settings to the SDK in the same init step described above. These settings are defined in the CaraTypes.h header, which you are already familiar with. Here is an example of how to set the maximum number of trackable faces:

HumanAnalysisService *human_analysis_service = new HumanAnalysisService("license.xml", <APP_PATH>, 5);

The above line initializes the SDK service to detect and track up to 5 concurrent faces. The default is 20 faces; if you set this number higher, the service will initialize to the maximum value. You can set this value anywhere from 1 to 20 faces, or omit it entirely. This setting is useful if you know how many faces to expect or if you want to test performance, for example.

Opening video

The Kairos Human Analytics SDK works with live video, pre-recorded video, and images. The HumanAnalysisService validates your media input source (a live video stream, a video or image file, or a single image) and sets up a Device that is used to pull frames for processing in the FaceProcessor. Once you've initialized the SDK, as described above, you are ready to start using the media source. See these examples:

Passing in an image such as a PNG or JPG:
human_analysis_service->initUsingImageSource("image_file.png");

Passing in a video source such as an MP4, MPEG, AVI, or MOV:
human_analysis_service->initUsingVideoSource("video_file");

Passing in a camera index of a USB webcam:
human_analysis_service->initUsingCameraIndex(0);

Passing in the location of an IP video stream (the following example is for a FosCam IP camera):
human_analysis_service->initUsingCameraSource("http://user:pass@<camera-ip-address>/VideoStream.asf");

Please note

There are many different types of IP video cameras and protocols, and not all are supported. If you are having trouble connecting to your camera, try to open the camera with the free program VLC which is available at VideoLAN. If you have issues or find an unsupported format or camera, please contact us to see if we can add support for it.

Process frame

The Kairos Human Analytics SDK makes it easy to process a frame after the Frame has been pulled from a Device. Since the Device is created and the Frame is pulled for you, all that is needed is for a developer to call the processFrame method of the HumanAnalysisService to ensure that the NEXT frame is processed. For a video or live stream, a frame will be available until the end of the video, until you turn your camera off, or until the camera stream is no longer bound. If you are processing a single image, then the NEXT frame will be empty. For live, pre-recorded or single image sources, you can use the isFrameEmpty method to determine when to exit your loop. Below is a simple example taken from the tutorial app that shipped with your copy of the Kairos Human Analytics SDK:

// media file, can be a relative or absolute location
std::string media_file = "test.mov";

// init the sdk for use
HumanAnalysisService *human_analysis_service =
    new HumanAnalysisService("license.xml", <APP_PATH>);

// set the pre-recorded video as the media source
human_analysis_service->initUsingVideoSource(media_file);

// loop through the video to process frames
while(1)
{
    // pull the next frame from the media source
    human_analysis_service->pullFrame();

    // check if we should exit this loop
    if(human_analysis_service->isFrameEmpty()){
        break;
    }

    // process the frame
    human_analysis_service->processFrame();

    // get the people in this frame
    People people = human_analysis_service->getPeople();
}

It is possible to pass in a frame from a separate video capture process outside of the SDK. Since processFrame is an overloaded method, you can use the appropriate overload to pass in your custom frame along with a setting to calibrate this frame. By default, the internal normalizer is set to CALIBRATION::VIDEO. If you are processing an image, you should use the type CALIBRATION::IMAGE; the default calibration type will work for images, but the image calibration type gives the best emotion and feature point results. Here is an example of how to indicate calibration when specifying a custom input frame:

cv::Mat frame;
frame = cv::imread(image_file);
human_analysis_service->processFrame(frame, CALIBRATION::IMAGE);

The above snippet tells the processor to accept your frame (based on an image) and adjust the output by using the IMAGE calibration algorithm. As you can see, your custom frame must be an OpenCV cv::Mat type.

The above calibration type can be found in the CaraTypes.h header.

Please note

The “emotion” tutorial has more information about passing in a frame.

Do something with results

After you’ve processed a Frame as above, you will want to do something with the results, such as printing the landmarks for each frame as the video is being processed. Below is a small helper method that prints the landmarks returned from the FeaturePoint detector once a frame has been processed.

// Helper that outputs feature point landmarks of a person’s face
static void printLandmarks(const FeaturePoints& points, int show){
    if(show) {
        int ct = 1;

        cout << ", \"landmarks\":[";
        for (auto p : points) {
            if(ct < 49){
                // all but the last point get a trailing comma
                cout << "{\"" << p.name << "\":{"
                     << "\"x\":" << p.x << ", \"y\":" << p.y << "}}, ";
            }else{
                // the 49th (last) point has no trailing comma
                cout << "{\"" << p.name << "\":{"
                     << "\"x\":" << p.x << ", \"y\":" << p.y << "}}";
            }
            ct++;
        }
        cout << "]";
    }
}

After you retrieve the people in a frame, here’s an example of how to get and print the feature points:

for(Person p : people){
    // get featurepoints from the person
    FeaturePoints fpts = p.FeaturePoints;

    // passing to the utility method defined above
    printLandmarks(fpts, 1);    
}

Handling exceptions

During execution, you will want to check on the state of things, such as making sure your license is still valid, so that you can take the appropriate next steps. Please see the CaraExceptions.h header for information regarding exceptions that you will want to trap. Each exception comes with a message() method that specifies why the exception occurred.

try{
    // ..some code
    HumanAnalysisService *h = new HumanAnalysisService("license.xml", <APP_PATH>);

    // ..process the frame
    h->processFrame();

}catch(CaraException& ce){
    // ..handling of the exception
    cout << ce.what();
}

When your camera is inaccessible or goes offline, a special type of exception is thrown: a CaptureError. A CaptureError can also be thrown when processing from a video file or image. The most common reasons for this error are:

  1. The camera is turned off
  2. The camera is in use by another process
  3. The camera stopped processing frames and suddenly went offline
  4. The video or image doesn’t exist, is corrupt, or is unreachable

It is recommended to trap this error in order to make sure your camera continues to process frames. Trapping this error gives you a convenient way to kickstart your camera back into action should any issues arise while your camera is running.

After trapping this exception, check its code to see exactly what happened; a short sketch follows the list below. The code is defined as an enum and is one of:

  1. CAM_OFFLINE
  2. CAM_NOT_AVAILABLE
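
For illustration, here is a minimal sketch of trapping a CaptureError inside a processing loop so the camera can be restarted. The code() accessor and the way the enum values are referenced here are assumptions; check CaraExceptions.h and CaraTypes.h for the actual definitions.

// Minimal sketch: keep processing frames and recover from capture problems
while(1)
{
    try{
        human_analysis_service->pullFrame();
        if(human_analysis_service->isFrameEmpty()){
            break;
        }
        human_analysis_service->processFrame();
    }catch(CaptureError& ce){
        cout << ce.message() << endl;
        if(ce.code() == CAM_OFFLINE){            // assumed accessor and enum spelling
            // try to re-open the camera, e.g. initUsingCameraIndex(0)
        }else if(ce.code() == CAM_NOT_AVAILABLE){
            // another process may own the camera; wait and retry or alert the user
        }
    }
}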

Distributing Your App

In Windows, you will need to distribute the entire contents of the “bin” and “config” folders along with your app. These folders should be placed in the same folder as your application’s base folder so that they can be found by the SDK.

For Linux apps, you should distribute the entire contents of the “bin” and “config” directories. These directories should be placed in the same directory as your application’s base directory so that they can be found by the SDK. In addition, you will need the script you use to launch your app, or your environment, to add the location of all run-time libs in the bin folder to the LD_LIBRARY_PATH, as described above in the “Library Locations” section.

 

 

Building the tutorial application

Included in your installation, you will find a tutorial folder. The tutorials are VERY simple examples that demonstrate basic usage of the Human Analytics SDK. What is demonstrated includes:

  • Constructing the main SDK object HumanAnalysisService
  • Setting the camera to use for capturing frames
  • Processing an image or pre-recorded video file, instead of a live capture
  • Accessing the results (of Person(s)) after a frame is processed
  • Exception handling if something goes wrong

The 3 modes of operation are:

  • Live
  • Video
  • Image

To run the application, do this:

  • In Linux, open a terminal window and run ./emotions video somevideo.mp4
  • In Windows, open a command window and run emotions -l license.xml -m video -file somevideo.mp4

Windows

To build, open the solution (.sln) of any tutorial inside the tutorial folder. There is only one for now; it's called emotions. This tutorial outputs the 6 emotions to the console. In live mode, you will need to make facial expressions to see the scores rise and fall. You can also run the pre-built program, which is also included, without building anything.

Please note

The tutorials mentioned here are configured, by default, to use the project working directory to look for the files the project uses, e.g. the license (license.xml) file. You can change this behavior by modifying the project's working directory. To do this, access the property sheet of the tutorial you are using and navigate to:

Under Project -> Properties -> Configuration Properties -> Debugging -> Working Directory

Change the working directory to where you would like the IDE to look for your license and other assets such as the models folder. Be careful, however: as mentioned above, the tutorial points to the default working directory (the project folder), and if you change this, you will also need to move your models, image/video files, etc. over to the new location or you will get errors in the tutorial.

To run the emotions tutorial:

  • Navigate to \tutorial\emotions in your Windows SDK installation
  • Open the emotions.sln file
  • Convert the emotions project to the updated IDE build tools when prompted. If you are using a more recent version of Visual Studio, you will be prompted to convert the emotions project (the tutorial was built with VS2012) to your version.
  • Right-click on the project (solution) in the projects pane of the IDE and choose Properties from the popup menu
  • As mentioned in the above section Creating your first App, select C/C++ -> General and change the additional header paths to match where you installed your SDK.
  • As mentioned in the above section Creating your first App, select Linker -> General and change the additional library paths to match where you installed your SDK.
  • Retrieve a copy of the models folder from inside your installation and copy it to the emotions project folder (this is where Visual Studio looks for its assets by default), located in \emotions\emotions.
  • Retrieve a copy of your license and place it in the emotions project folder (this is where Visual Studio looks for its assets by default), located in \emotions\emotions.
  • Set the environment PATH under system variables, NOT user variables. Make sure you have admin rights to do this. Set the PATH using the following paths:
    • SDK-INSTALL-PATH\bin\cara\x64\vc11
    • SDK-INSTALL-PATH\bin\opencv
    • SDK-INSTALL-PATH\bin\boost
    • SDK-INSTALL-PATH\bin\libxml
  • Now that you are ready to compile, locate the debug toolbar, change the configuration to x64 and the build type to Debug/Release.
  • From the build menu, select build or rebuild solution. Sometimes it is safer to do clean solution before building.
  • Click the Local Windows Debugger (ok to do for Release mode as well) button from the debug toolbar to launch the tutorial

Notes

  • If the tutorial does not compile or it crashes during run-time, please check the above section Creating your first App for more information about what to try if things go wrong
  • Please substitute SDK-INSTALL-PATH (mentioned above) with the actual path of where you installed your SDK after downloading it

Linux

As above, there is a tutorial folder. This does not have a pre-built executable but only a source file with all code included to demonstrate usage.

To compile, you will need to use the gcc (version 4.9) compiler and include the header paths and library references necessary for your program to compile successfully. Located in the bin folder are several libraries (libxml2, ssl, boost, and opencv) that you will need to tell gcc about. As mentioned previously, these library references are set in the LD_LIBRARY_PATH.

Quick notes about tutorials and/or stand-alone apps

  • There is a comprehensive README file included inside the tutorial with details about running the included tutorial app. The info can also be applied to installing the SDK
  • To run the tutorial app, you must include your license file or a LicenseInvalidException is thrown
  • If running the tutorials or standalone apps from within the Visual Studio IDE, make sure your license and model assets are in the project root, not the solution root or the debug/release directories. Otherwise the executable may not find your license or models, you will get exception errors because your data model (the .dat file in the models folder) or license can't be located, and your app or the included tutorial will not work until this is resolved. Running an executable outside of the IDE, however, will work as expected.
  • The Visual Studio solution is pre-configured and serves as a guide only. The SDK is 64-bit; the default configuration may already be set to 64-bit, but change it if not. The static imports and include locations are also preset, but you will need to change the paths to match where your SDK is installed
 

 

Building the tutorial app using Docker (Optional)

Docker is a tool that can be used to create containers with base images that include an OS and all of the dev packages and other tools required to test small portable distributions. This guide assumes you have Docker installed and configured. For more information about Docker, please check out the following link:

Docker.com

We have included a Dockerfile within the tutorial that will build the tutorial for you, from top to bottom. In the end, you will have a Docker container named tutorial/emotions that contains a full OS (Ubuntu 14.04), the boost, OpenCV and other dependency libraries, an installed Human Analytics SDK and a fully built and functioning tutorial app. We’ve even included a video and an image for you to test with if you don’t have any on hand.

Generally, to build the Docker container you have to:

  • cd into /tutorial/emotions/server/src
  • in your terminal, to build the container, type make build
  • in your terminal, to run the container, type make run

To execute the tutorial do:

  • cd into /opt/kairos/HASDK/emotions
  • obtain a copy of your license.xml and drop it into the emotions directory
  • get an image or video (or use the ones provided, they are in the /media folder) and drop it into the emotions directory
  • In your terminal, type ./emotions video somevideo.mp4
 

 

What’s next?

How to get help

You can contact us to open a ticket for any challenges you may have during your development with the Kairos Human Analytics SDK.

Alternatively, contact us to talk about pricing, licensing, reselling or long term support and maintenance agreements. We’d love to hear from you.

 

 

Miscellaneous

Open Source Credits

OpenCV

OpenCV is an open source library that we use for video and image capture, and other support structures. The only required dependencies are the run-time shared libraries for both Windows and Linux. The architecture of the OpenCV runtime matches the architecture of the Human Analytics SDK, which in this case is x86_64 (64-bit). These runtime binaries are included with your installation, as described above.

The version that comes with the Human Analytics SDK is always a stable version of OpenCV that works well with the current version of the SDK. We’ve also included debug versions (Windows only) of the OpenCV binaries to make it easy to create debug versions of your application when building with the Human Analytics SDK.

http://opencv.org/license.html

Boost

Boost is a set of libraries for C++ that provide many common functions such as linear algebra, multithreading and image processing.

http://www.boost.org/LICENSE_1_0.txt

OpenSSL

OpenSSL is a library used to secure communications through cryptography.

https://www.openssl.org/source/license.html

libXML2

LibXML2 is a library used to parse XML formatted documents.

http://xmlsoft.org/FAQ.html

 

 

License and Terms

Our latest terms and conditions, as well as our privacy policies can always be found online at www.kairos.com/terms and www.kairos.com/privacy respectively.