Long queues at supermarket checkouts can turn any shopping trip into a frustrating experience.
But thanks to advances in technology, self-service checkouts in supermarkets now bring us a great deal of convenience.
Can we build a self-service checkout with HuskyLens? What do we need to identify every product accurately and calculate the final bill?
Function Introduction:
This project uses the tag recognition function of HuskyLens to recognize tags attached to products, calculate the total price, and realize a self-service checkout.
Materials:
Micro:bit https://www.dfrobot.com/product-2125.html
IO Extender for micro:bit V2.0 https://www.dfrobot.com/product-1867.html
HUSKYLENS https://www.dfrobot.com/product-1922.html
Knowledge Field:
If we take a closer look at the checkout process, we will find that manual and self-service checkouts work the same way: scan the product's barcode and calculate the bill. Since every product has a different barcode, we only need to find replacements for the barcode scanner and the barcodes to build our project.
Barcodes-scanning devices → the tag recognition function of HuskyLens
Barcodes → AprilTag
1. What Is Tag Recognition?
Tag recognition technology refers to the effective, standardized coding and identification of items, and it is a foundation of informatization. As people grow more aware of health and safety, the food industry faces ever higher requirements for product quality and safety, from raw materials and transportation to production, storage, traceability, and management. Tag recognition plays an important role in meeting companies' needs for product tracking and tracing.
Tag recognition technology mainly includes barcode technology, IC card technology, radio-frequency identification (RFID), optical character recognition, speech recognition, biometric recognition, remote sensing, intelligent robot perception, and other technologies.
2. What is AprilTag?
AprilTag is a visual fiducial system developed by a University of Michigan research team for AR, robotics, and camera calibration. A tag acts like a barcode, storing a small amount of information (the tag ID) while also providing simple, accurate 6D (X, Y, Z, roll, pitch, yaw) pose estimation of the tag.
3. The Principle of HuskyLens AprilTag Recognition
AprilTag recognition mainly includes the following steps:
1. Edge Detection: Look for edges in the image.
2. Quadrangle Detection: Look for quadrangles among the detected edges.
3. Decoding: Match each candidate quadrangle against the tag coding and verify it.
Through these steps, the tag recognition function of HuskyLens can recognize different AprilTags. So, we just need to put different AprilTags on different commodities for recognition.
4. HuskyLens Sensor - Tag Recognition Demonstration
Step 1: Tag Detection
When HuskyLens detects a tag, the tag is automatically selected by a white frame on the screen.
Step 2: Tag Learning
Point the "+" symbol at the tag and short or long press the "Learning button" to learn the first tag. After you release the "Learning button", the screen will display: "Press the button again to continue! Press other buttons to end." To learn the next tag, press the "Learning button" before the countdown ends. If you do not need to learn any more tags, press the "Function button" before the countdown ends, or simply wait for the countdown to finish.
In this project we need to learn several tags in a row, so short press the "Learning button" before the countdown ends, then point the "+" at the second tag and short or long press the "Learning button" to learn it. Repeat this for the remaining tags.
The tag IDs follow the order of learning: the learned tags are labeled "tag: ID1", "tag: ID2", "tag: ID3", and so on, and each tag's frame is shown in a different color.
Step 3: Tag Recognition
When HuskyLens encounters a learned tag, a colored frame with its ID is automatically displayed on the screen. The size of the frame changes with the size of the tag, and the frame automatically tracks the tag.
Project Practice:
We will finish the project in three tasks. First, use the tag recognition function of HuskyLens to learn and recognize three tagged products and display their names on the micro:bit dot matrix. Second, building on that, add start and end scanning events so that each customer's products are counted separately. Finally, add a total-price settlement function to complete the supermarket self-service checkout.
Task 1: Recognize Products
Let HuskyLens learn and recognize the tags affixed to three different products, and write a program that makes the micro:bit dot matrix scroll the corresponding product names.
Task 2: Start and End Scanning Codes
Once the customer presses button A on the micro:bit, the dot matrix scrolls the product names recognized by HuskyLens. If button B is pressed, the dot matrix stops displaying scanned products. When the next customer presses button A, scanning restarts.
Task 3: Price Settlement
Building on task 2, when any learned product is recognized, its price is added to the total price. After button B is pressed to end scanning, the dot matrix shows the total price of all products scanned in this session.
Task 1: Recognize Products
1. Hardware Connection
2. Program Design
Step 1. Learning and Recognition
Suppose the supermarket sells only three products: a cup, a pastry, and a utility knife, each with a unique tag.
HuskyLens can recognize these three tags. In "multi-learning" mode, learn them in the order cup, pastry, utility knife so that they get ID1, ID2, and ID3. When the same tag is recognized later, HuskyLens returns the corresponding ID. We can then use a selection structure to make the micro:bit matrix display the product name corresponding to each ID.
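The actual program is built from Mind+ blocks, but the selection structure can be sketched in plain Python. The function name and the dictionary here are illustrative stand-ins, not part of any HuskyLens API:

```python
# Map each learned HuskyLens tag ID to a product name.
# IDs follow the learning order: cup = 1, pastry = 2, knife = 3.
PRODUCT_NAMES = {1: "cup", 2: "pastry", 3: "knife"}

def name_for_tag(tag_id):
    """Return the product name for a learned tag ID, or None for an
    unlearned tag so the matrix display can stay off."""
    return PRODUCT_NAMES.get(tag_id)
```

A dictionary lookup replaces a chain of if/else branches and makes adding a fourth product a one-line change.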
Step 2. Mind+ Software Settings
Open Mind+ (version 1.6.2 or above), switch to "Offline" mode, click "Extensions", select "micro:bit" under "Board", and select "HuskyLens AI Camera" under "Sensor".
Step 3. Command Learning
Here are the main blocks used.
① Initialize once at the start of the main program, before the loop. You can select I2C or soft serial; there is no need to change the I2C address. Note that the "Output protocol" of your HuskyLens sensor must be set to match the program, otherwise data cannot be read.
② You can switch between algorithms freely, but note that only one algorithm can run at a time, and switching takes some time.
③ The main controller asks HuskyLens to store its data in the "Result" once (the data is kept in a memory variable on the main board, and each request refreshes it once); the data can then be read from the "Result". The latest data is obtained only when this block is called.
④ Check from the requested "Result" whether there is a frame or arrow on the screen, including both learned (ID > 0) and unlearned ones; returns 1 if there is one or more.
⑤ Check from the requested "Result" whether IDx has been learned.
⑥ Check whether the IDx requested from the "Result" is on the screen. "Frame" refers to algorithms that draw frames on the screen; "arrow" refers to the algorithm that draws arrows. Select "arrow" only for the line-tracking algorithm; for all others, choose "frame".
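The blocks above follow a request-then-read pattern: block ③ refreshes a cached result, and blocks ④–⑥ read from that cache. A rough Python sketch of that flow, using a made-up stand-in class rather than any real HuskyLens driver API:

```python
class FakeHuskyLens:
    """Hypothetical stand-in for the HuskyLens blocks: request_result()
    refreshes a cached result, and the check methods read only from
    that cache, mirroring blocks ③, ④, and ⑥."""
    def __init__(self, visible_ids):
        self._visible = visible_ids   # tag IDs currently on screen
        self._result = None           # nothing cached until requested

    def request_result(self):         # block ③: refresh the cached data
        self._result = list(self._visible)

    def frame_on_screen(self):        # block ④: any frame at all?
        return 1 if self._result else 0

    def id_on_screen(self, tag_id):   # block ⑥: is this ID visible?
        return tag_id in (self._result or [])

sensor = FakeHuskyLens(visible_ids=[2])
sensor.request_result()               # must be called before each read
```

The key point the sketch captures is that forgetting the request step (block ③) leaves the checks reading stale or empty data.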
Step 4. Flowchart Analysis
3. Sample Program
4. Operating Effect
When the cup, pastry, or utility knife is recognized, the dot matrix scrolls "cup", "pastry", or "knife" across the screen. When no tag, or an unlearned tag, is recognized, the screen stays off.
Task 2: Start and End Scanning Codes
1. Hardware Connection
The same as task 1.
2. Program Design
We need to add events so that customers know when scanning starts and when it ends. Suppose the customer presses button A on the micro:bit to start scanning. Once a product's information has scrolled across the screen, the next product can be scanned, so the process is a loop. To exit the loop, the customer presses button B. The next customer then only needs to press button A again to scan their own products.
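The loop above amounts to a small state machine toggled by the two buttons. A minimal Python sketch, with button presses and tag reads simulated as a list of events (in the real program they come from the micro:bit and HuskyLens blocks):

```python
# Minimal sketch of the task 2 scanning loop. Event tuples are
# simulated inputs: ("A",) and ("B",) are button presses, ("tag", n)
# is a recognized tag ID.
def run_checkout(events):
    """Return the product names displayed between pressing A and B."""
    names = {1: "cup", 2: "pastry", 3: "knife"}
    displayed = []
    scanning = False
    for event in events:
        if event[0] == "A":              # button A: start scanning
            scanning = True
        elif event[0] == "B":            # button B: stop scanning
            scanning = False
        elif event[0] == "tag" and scanning:
            name = names.get(event[1])
            if name:                     # ignore unlearned tags
                displayed.append(name)   # scroll the name on the matrix
    return displayed
```

Tags seen before A or after B are ignored, which is exactly the behavior described in the operating effect below.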
The following is the logic diagram based on the analysis above:
3. Sample Program
4. Operating Effect
Before button A is pressed, the micro:bit matrix does not display any scanned product names.
Between pressing button A and button B, the micro:bit matrix displays each scanned product name in turn.
After button B is pressed, the micro:bit matrix stops displaying scanned product names.
Task 3: Price Settlement
1. Hardware Connection
The same as task 1.
2. Program Design
The total price grows as more products are scanned, so we only need to add a variable to the task 2 program. Each time a customer presses button A, the previous total is cleared; when a product is recognized, its price is added to the total; and the total is displayed after button B is pressed.
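The accumulation logic can be sketched in a few lines of Python. The prices here are made-up example values, since the original project keeps them in a Mind+ variable:

```python
# Sketch of the task 3 settlement logic. Example prices per tag ID:
# cup = 1, pastry = 2, knife = 3 (values are illustrative only).
PRICES = {1: 5.0, 2: 3.5, 3: 2.0}

def settle(scanned_ids):
    """Clear the total at the start of a session (pressing A), add the
    price of every recognized product, and return the total that is
    shown on the matrix when B is pressed."""
    total = 0.0
    for tag_id in scanned_ids:
        total += PRICES.get(tag_id, 0.0)  # unlearned tags add nothing
    return total
```

Resetting the total inside the function mirrors clearing the variable on each press of button A, so one customer's bill never leaks into the next.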
The following is the logic diagram based on the analysis above:
3. Sample Program
4. Operating Effect
Building on the operating effect of task 2, the micro:bit matrix displays the total price after button B is pressed.
Project Summary:
Project Review:
This project uses two-dimensional code tags to represent product information; tag recognition outputs a specific ID for each tag, so the dot matrix can display the corresponding product information. A total-price settlement function is then added, completing a basic supermarket self-service checkout.
Knowledge Review:
1. Recognize the importance of tag recognition and how to use it in life.
2. Learn to recognize tags and make judgments accordingly.
Project Development:
After completing the self-service checkout, could we use tags to mark locations in a house? A sweeping robot could then judge its position in time and make corresponding adjustments along its route.