Frequently Asked Questions
1. Can Smart Engines products be integrated into any solution?
Yes. Our products can be integrated into any solution. Different implementation variants are available, depending on the client's infrastructure.
2. How can I install and test the system?
You can download the mobile demo applications from Google Play and the App Store:
- Google Play: https://play.google.com/store/apps/details?id=com.smartengines.se
- App Store: https://apps.apple.com/ru/app/smart-engines/id1593408182
To get the SDK for your operating system, contact us by email at sales@smartengines.com or support@smartengines.com, and we will prepare the appropriate SDK for testing.
3. Where can I find the description of your SDK?
All documentation for integration and use, including detailed instructions, is collected in the `/doc` folder inside the SDK.
4. How can I test your library?
To test our library, install and run any of the integration examples in the `/samples` folder.
5. I have an SDK version for the Windows OS, but I need to work with Linux. How can I run your system in Linux?
We provide separate SDKs tailored to each supported platform. Contact us by email at sales@smartengines.ru or support@smartengines.ru, and we will supply you with the appropriate SDK.
6. I have an SDK version for Centos 7 OS. When I try to run it in Ubuntu/Debian/Alpine, "undefined symbol" errors appear. Why?
SDKs built for different families of Linux distributions may differ from each other, and this SDK version is not binary-compatible with your distribution (for example, Alpine uses musl instead of glibc). Contact us by email at sales@smartengines.ru or support@smartengines.ru, and we will supply you with the appropriate SDK.
7. In the mobile examples inside the SDK, when the scanner window opens, the button for starting scanning appears. Is it possible to make the recognition of objects in the frame start automatically when pointing the camera, without the need to press a button?
You can hide this button, but we do not recommend doing so, for two reasons:
- The user taps the button after making sure that the document is in focus and in the desired position. This speeds up recognition, because no "junk" frames are submitted for analysis at the start;
- While the camera screen is open, our library can be initialized in parallel. The button typically becomes active once the system is initialized and ready.
8. How can I use the SDK if I need to send recognition requests from different devices? Are there cases for recognition from multiple devices?
If you need to recognize documents from different devices, there are two ways of doing it:
- Recognize the documents directly on client devices using the mobile SDK or the WASM-enabled web SDK, which runs in the device's browser. This requires integrating our SDK into your web infrastructure. Document recognition on the client device allows you to feed multiple images from the device's camera into one recognition session, which gives better results under constantly changing shooting conditions (glare, shadows, blur, etc.).
- Send images to the server. In this case, you should use the server SDK.
9. What image formats are supported?
The following formats are supported:
- JPEG;
- PNG;
- TIFF;
- Image buffers in RGB;
- Single-channel (grayscale) base64-encoded images, both as a buffer and as a file.
10. What to do if I need to recognize an image in PDF format?
Since PDF is a container format whose pages may combine raster and vector layers, we recommend extracting pages from the PDF and converting them to a raster format before recognition.
11. What to do if I need to recognize an image in HEIC format?
Support for the HEIC format in the mobile SDK is no different from that of other image formats: HEIC is read using system tools.
In the server SDK, you should handle the HEIC format yourself using third-party tools: either convert it to one of the formats we support, or pass the raw pixels directly as an RGB buffer.
12. How can we determine which image format we have before starting recognition?
If you don't know which image format you have at the input, you can rely on its MIME type.
If you are using the mobile SDK, you can use the built-in tools of the operating system, which always return images as `Bitmap` on Android and `UIImage` on iOS.
Creating an instance of the `se.common.image` class from `Bitmap` and `UIImage` is shown in our samples.
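When only raw bytes are available (e.g., in a server pipeline), the format can also be inferred from the file's leading "magic" bytes. The sketch below is illustrative and not part of the Smart Engines SDK; it covers the raster formats listed above.

```python
# Infer a raster image's format from its magic bytes.
# Illustrative helper, not part of the Smart Engines SDK.

def sniff_image_format(data: bytes) -> str:
    """Return a best-guess format name for common raster formats."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    # TIFF starts with the byte-order mark "II" (little-endian) or
    # "MM" (big-endian), followed by the magic number 42.
    if data.startswith(b"II*\x00") or data.startswith(b"MM\x00*"):
        return "tiff"
    return "unknown"

print(sniff_image_format(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # png
```

This avoids trusting file extensions, which are often wrong for uploaded files.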
13. How can I know that recognition was performed correctly and that I can work with the result?
To assess the system's confidence in the recognition of each field, we suggest relying on two parameters.
`Confidence` is a heuristic estimate of the system's "confidence" in the result. This characteristic is inherent in almost all the tools in our library: for a template, it shows how confident the system is that the corresponding element is located in the given area; the same applies to document fields in the image, and so on. Confidence takes values from 0.0 to 1.0.
`isAccepted` can be `true` or `false`. It is set individually for each field based on that field's confidence threshold: if the confidence is not lower than the threshold value, the `isAccepted` flag is set to `true`.
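A consuming application typically combines the two parameters as sketched below. The field structure and attribute names here are hypothetical, not the SDK's actual API; only the logic (trust `isAccepted`, optionally re-check `confidence` against a stricter threshold) reflects the rule above.

```python
# Hypothetical result-checking sketch; field/attribute names are
# illustrative, not the Smart Engines SDK's real API.
from dataclasses import dataclass

@dataclass
class RecognizedField:
    name: str
    value: str
    confidence: float   # heuristic confidence, 0.0..1.0
    is_accepted: bool   # set per field from its confidence threshold

def trusted_values(fields, extra_threshold=0.9):
    """Keep fields the engine accepted, re-checked against a stricter threshold."""
    return {
        f.name: f.value
        for f in fields
        if f.is_accepted and f.confidence >= extra_threshold
    }

fields = [
    RecognizedField("surname", "DOE", 0.97, True),
    RecognizedField("number", "12345", 0.55, False),
]
print(trusted_values(fields))  # {'surname': 'DOE'}
```

Fields that fail the check can be routed to manual review instead of being discarded.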
14. Is it possible to measure the recognition time?
Yes. Recognition speed metrics have been added to the SDK examples, but they are not part of the recognition library. They are computed as the difference between two timestamps; the recognition time is the running time of the `Process()` method.
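The same measurement can be reproduced in your own code by taking two timestamps around the recognition call. In this sketch, `process_image()` is a stand-in for the SDK's `Process()` method, not the real API.

```python
# Measure elapsed time as the difference between two timestamps.
# process_image() is a stand-in for the SDK's Process() method.
import time

def process_image(data):
    time.sleep(0.05)  # simulate recognition work
    return "result"

start = time.perf_counter()
result = process_image(b"...")
elapsed = time.perf_counter() - start
print(f"recognition took {elapsed * 1000:.1f} ms")
```

`time.perf_counter()` is preferred over `time.time()` here because it is monotonic and has the highest available resolution for interval measurement.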
15. Does each engine process one type of documents (one type — one engine) or can one engine contain several types of documents?
One engine can contain several types of documents. The types of documents and their processing scenarios are set in the configuration file (bundle) and determined individually. The list of document types included in the engine can be obtained from the library or found in the `*.json` file in the `/doc` folder inside the SDK.
16. Is it possible to recognize a document contained in the set if its type is not known in advance?
You can set a list of recognized documents by specifying the appropriate settings in the configuration file (bundle).
All supported documents contained in the bundle are grouped by engine, and each engine has a specific set of recognition tools. By setting the recognition mode and mask, you can define the set of documents to be recognized.
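The idea of selecting document types with a mask can be sketched as wildcard matching over type names. The type names and mask syntax below are purely hypothetical, not the bundle's actual configuration format; only the concept (one mask enables a subset of the supported types) is taken from the text above.

```python
# Illustrative wildcard-mask selection over document type names.
# Names and mask syntax are hypothetical, not the SDK's real format.
from fnmatch import fnmatch

supported_types = [
    "deu.id.type1",
    "deu.passport.type1",
    "fra.id.type1",
    "fra.drvlic.type1",
]

def enabled_types(types, mask):
    """Return the document types matched by the wildcard mask."""
    return [t for t in types if fnmatch(t, mask)]

print(enabled_types(supported_types, "deu.*"))
# ['deu.id.type1', 'deu.passport.type1']
```

Restricting the enabled types in this way narrows the search space, which is also why declaring only the documents you actually need tends to speed up recognition.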
17. How can I update the library to the latest version?
It depends on the type of SDK you are using.
Server-side SDK:
Replace the libraries (in the `/bin` folder), the bindings (in the `/bindings` folder), and the bundle (the `*.se` file in the `/data-zip` folder). All of these are contained in the delivered SDK.
In Android OS:
Unpack the SDK. Update your Android project in three steps:
- Binaries (the binary files): find the `jniLibs/` folder in your application and replace its contents with the contents of the `sample/app/src/main/jniLibs/` folder from the SDK.
- Wrapper (the wrapper library): find `lib/*.jar` in your application and replace it with `sample/app/src/main/libs/*.jar` from the SDK.
- Bundle (the configuration file): find the `assets/data/*.se` file in your application and replace it with `sample/app/src/main/assets/data/*.se` from the SDK.
In iOS:
Replace the contents of the `SESmartDoc` and `SESmartDocCore` folders with the contents of the corresponding folders from the SDK.