Frequently Asked Questions

1. Can Smart Engines products be integrated into any solution?

Yes. Our products can be integrated into any solution. Different implementation variants are available, depending on the client's infrastructure.

2. How can I install and test the system?

You can download the mobile demo applications from Google Play and the App Store:
in Google Play: https://play.google.com/store/apps/details?id=com.smartengines.se
in the App Store: https://apps.apple.com/ru/app/smart-engines/id1593408182

To get the SDK for your operating system, contact us by email at sales@smartengines.com or support@smartengines.com, and we will prepare the appropriate SDK for testing.

3. Where can I find the description of your SDK?

All documentation for integration and use, including detailed instructions, is included in the SDK, in the /doc folder.

4. How can I test your library?

To test our library, install and run any of the integration examples in the /samples folder.

5. I have an SDK version for the Windows OS, but I need to work with Linux. How can I run your system in Linux?

We provide separate SDK builds, each suited to its target platform. Contact us by email sales@smartengines.ru or support@smartengines.ru, and we will supply the appropriate SDK.

6. I have an SDK version for CentOS 7. When I try to run it in Ubuntu/Debian/Alpine, "undefined symbol" errors appear. Why?

SDK builds for different Linux distributions may differ from each other, and this SDK build is not compatible with your distribution. Contact us by email sales@smartengines.ru or support@smartengines.ru, and we will supply the appropriate SDK.

7. In the mobile examples inside the SDK, when the scanner window opens, the button for starting scanning appears. Is it possible to make the recognition of objects in the frame start automatically when pointing the camera, without the need to press a button?

You can hide this button, but we do not recommend it, for two reasons:

  • The user taps the button after making sure that the document is in focus and in the desired position. This speeds up recognition, because no "junk" frames are submitted for analysis at the start;
  • While the camera screen is opening, our library can be initialized in parallel. The button becomes active once the system is initialized and ready to scan.

8. How can I use the SDK if I need to send recognition requests from different devices? Are there cases for recognition from multiple devices?

If you need to recognize documents from different devices, there are two ways of doing it:

  1. Recognize the documents directly on client devices using the mobile SDK or the WASM-enabled web SDK; with the web SDK, recognition runs in the device's browser. This requires integrating our SDK into your web infrastructure. Recognition on the client device lets you feed multiple camera frames into a single recognition session, which gives better results in constantly changing shooting conditions (glare, shadows, blur, etc.).
  2. Send images to the server. In this case, you should use the server SDK.

9. What image formats are supported?

The following formats are supported:

  • JPEG;
  • PNG;
  • TIFF;
  • raw image buffers in RGB or single-channel (grayscale) format;
  • base64-encoded images, both as a buffer and as a file.

10. What to do if I need to recognize an image in PDF format?

Since PDF is a container format whose pages can hold both raster and vector layers, we recommend extracting and converting the pages from the PDF to a raster format before recognizing them.
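As a sketch of this pre-processing step, the pages can be rasterized with any third-party PDF library. The example below assumes PyMuPDF (the fitz module), which is not part of our SDK; any other rasterizer works the same way.

```python
import os

def dpi_to_zoom(dpi):
    """PDF user space is defined at 72 dpi; the zoom factor scales pages to the target dpi."""
    return dpi / 72.0

def pdf_to_pngs(pdf_path, out_dir, dpi=300):
    """Rasterize every page of a PDF into a PNG file and return the file paths."""
    import fitz  # PyMuPDF; a third-party rasterizer, not part of the Smart Engines SDK
    zoom = dpi_to_zoom(dpi)
    paths = []
    with fitz.open(pdf_path) as doc:
        for i, page in enumerate(doc):
            pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
            out = os.path.join(out_dir, "page_%03d.png" % i)
            pix.save(out)
            paths.append(out)
    return paths
```

The resulting PNG files can then be submitted for recognition like any other supported raster image.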

11. What to do if I need to recognize an image in HEIC format?

Support for the HEIC format in the mobile SDK is no different from support for other image formats: HEIC is read using system tools.
In the server SDK, you should handle the HEIC format yourself with third-party tools: either convert it to one of the formats we support, or pass the raw pixels directly as an RGB buffer.
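For the server SDK case, it helps to detect HEIC input up front so it can be routed to a converter. HEIC files are ISO base media containers whose "ftyp" box carries a brand code; the sketch below checks the first 12 bytes of the file (the brand list is a common subset, not exhaustive).

```python
# Common HEIC/HEIF brand codes found in the "ftyp" box (not an exhaustive list).
HEIC_BRANDS = {b"heic", b"heix", b"hevc", b"hevx", b"mif1", b"msf1"}

def is_heic(header):
    """Return True if the first bytes of a file look like a HEIC/HEIF container."""
    return len(header) >= 12 and header[4:8] == b"ftyp" and header[8:12] in HEIC_BRANDS
```

A file that tests positive would then be converted with a third-party tool before being passed to the server SDK.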

12. How can we determine which image format we have before starting recognition?

If you don't know which image format you have at the input, you can rely on its MIME type.
If you are using the mobile SDK, you can use the built-in tools of the operating system, which always return images as Bitmap on Android and as UIImage on iOS.
Creating an instance of the se.common.image class from a Bitmap or UIImage is shown in our samples.
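Outside the mobile SDK, the MIME type can be guessed from the file's leading magic bytes; the sketch below covers the raster formats listed in question 9.

```python
def sniff_image_mime(header):
    """Guess the MIME type of an image file from its first bytes."""
    if header.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if header.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if header[:4] in (b"II*\x00", b"MM\x00*"):  # little- and big-endian TIFF
        return "image/tiff"
    return None  # unknown: treat the data as a raw pixel buffer or reject it
```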

13. How can I know that recognition was performed correctly and that I can work with the result?

To assess the system's confidence in each recognized field, rely on two parameters.
confidence is a heuristic estimate of how certain the system is in the result.
Almost every tool in our library reports it: for a document template, it shows how confident the system is that the corresponding element lies in a given area; for a document field in the image, how confident it is in the recognized value; and so on.
confidence takes values from 0.0 to 1.0.
isAccepted takes the values true/false. It is set individually for each field based on a confidence threshold: if confidence is not lower than the threshold, the isAccepted flag is set to true.
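The acceptance rule can be sketched as follows; the field names and threshold values here are illustrative, not part of the SDK.

```python
def is_accepted(confidence, threshold):
    """A field is accepted when its confidence reaches the per-field threshold."""
    return confidence >= threshold

# Illustrative per-field thresholds and recognition confidences.
thresholds = {"name": 0.85, "number": 0.95}
confidences = {"name": 0.91, "number": 0.90}
accepted = {field: is_accepted(conf, thresholds[field])
            for field, conf in confidences.items()}
# accepted == {"name": True, "number": False}
```

Fields with isAccepted set to false would then be re-scanned or sent for manual review.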

14. Is it possible to measure the recognition time?

Yes. Recognition speed metrics have been added to the SDK examples, but they are not part of the recognition library. They measure the difference between two timestamps: the recognition time is the running time of the Process() method.
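The same measurement can be reproduced in any integration; process below is a stand-in for the SDK session's Process() call, not the real API.

```python
import time

def timed_call(process, *args):
    """Run a callable and return its result together with the elapsed wall-clock time."""
    start = time.monotonic()
    result = process(*args)
    return result, time.monotonic() - start

# Stand-in for session.Process(frame); any callable can be timed the same way.
result, seconds = timed_call(lambda frame: frame.upper(), "frame")
```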

15. Is it possible to recognize multiple phone numbers at the same time?

Yes. You can set the maximum number of recognized phone numbers in the result using the session option code_text_line.maxAllowedObjects: the system will search for at most the number of objects specified there.
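The option name above is the real one; how it is passed depends on your binding (see /doc inside the SDK). As a plain illustration, the limit is an ordinary string-valued session option:

```python
# Illustrative only: the option key is real, but the dict is a stand-in for
# however your SDK binding accepts session options (see /doc in the SDK).
session_options = {
    "code_text_line.maxAllowedObjects": "3",  # search for up to three phone numbers
}
```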

16. In what encoding is the QR code recognition result?

The encoding is selected and set by the system automatically.
Two-dimensional barcodes allow you to store any information: text, images, etc. To interpret it correctly, you need to know in advance what is expected in the output. Therefore, we always return the "raw" information from such codes in the form of base64 and hex strings.

In the Russian Federation there is a standard for SBP bank transfers, one variant of which places bank details as a string encoded in a two-dimensional barcode of one of three symbologies: QR, DataMatrix or Aztec. According to the standard (GOST R 56042-2014: Two-dimensional barcode symbols for making payments to individuals), the strings themselves can be in one of three encodings: CP1251, UTF-8 or KOI8-R. The encoding type is indicated inside the string by a special header.
However, not all companies that create their own payment barcodes follow this standard: errors occur both in the header and in the use of encodings beyond those the standard lists. Therefore, for the Russian payment barcode standard we have implemented automatic encoding detection.
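A simplified version of such detection can be sketched with the three encodings the standard allows. Trying strict UTF-8 first works because CP1251/KOI8-R byte sequences of Cyrillic text are rarely valid UTF-8; the real SDK also inspects the header inside the string, which this sketch omits.

```python
import base64

ENCODINGS = ("utf-8", "cp1251", "koi8-r")  # the three encodings allowed by GOST R 56042-2014

def decode_payment_string(raw_b64):
    """Decode a base64 payment string, trying each allowed encoding in turn."""
    raw = base64.b64decode(raw_b64)
    for enc in ENCODINGS:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return None  # none of the standard encodings fit
```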

17. How can I update the library to the latest version?

It depends on the type of the SDK you are using.

Server-side SDK:
Replace the libraries in the /bin folder, the bindings (in the /bindings folder) and the bundle (the *.se file in the /data-zip folder). All of these are contained in the delivered SDK.
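The replacement can be scripted; the sketch below assumes the folder layout described above, with placeholder paths for the new SDK delivery and your installation.

```python
import glob
import os
import shutil

def update_server_sdk(new_sdk_dir, app_dir):
    """Replace binaries, bindings and the *.se bundle with the ones from a new SDK delivery."""
    for sub in ("bin", "bindings"):
        src = os.path.join(new_sdk_dir, sub)
        dst = os.path.join(app_dir, sub)
        shutil.rmtree(dst, ignore_errors=True)  # drop the old copy before replacing it
        shutil.copytree(src, dst)
    os.makedirs(os.path.join(app_dir, "data-zip"), exist_ok=True)
    for bundle in glob.glob(os.path.join(new_sdk_dir, "data-zip", "*.se")):
        shutil.copy(bundle, os.path.join(app_dir, "data-zip"))
```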

In Android OS:
Unpack the SDK. Update your Android project in three steps:

  1. Binaries: find the jniLibs/ folder in your application and replace its contents with the contents of the /sample/app/src/main/jniLibs/ folder from the SDK.
  2. Wrapper: find lib/*.jar in your application and replace it with sample/app/src/main/libs/*.jar from the SDK.
  3. Bundle (the configuration file): find the assets/data/*.se file in your application and replace it with the sample/app/src/main/assets/data/*.se file from the SDK.

In iOS:
Replace the contents of the folders SESmartCode and SESmartCodeCore with the contents of the corresponding folder from the SDK.