Quarry & Mining • April 12, 2026
Quarry Load Verification AI: How We Built It on Cameras That Were Already There
Quarries lose money to inaccurate load counts. Not necessarily through fraud, but through the inherent inaccuracy of manual tally systems, shift-handover gaps, and the practical impossibility of having someone watch every loader cycle and truck departure. Here is how we solved it without replacing a single camera.
By the Industrial AI Team • 7 min read
The Problem We Were Solving
The brief was straightforward on paper: the site needed a reliable count of truck loadings per shift, tied to time and to the specific loader involved. Manual tallying had worked for years but was producing numbers that did not reconcile cleanly with ticket weights at the weighbridge, and nobody could easily reconstruct what had happened on a specific shift when a discrepancy came up.
The site already had cameras. They had been installed for general security monitoring over the years and were positioned in reasonable locations for operational visibility. The question was whether the existing camera angles and image quality were usable for computer vision, and whether a system could be built that would run reliably in the operational environment.
Camera Assessment: What We Found
Before writing a line of code or starting any model training, we spent time reviewing the existing camera feeds against what we needed to detect. This step is often underestimated. The feasibility of a computer vision project depends heavily on whether the cameras can produce usable images for the specific detection task.
What we found on this site was a mix. Some cameras had good positional coverage of the loading area but were older and produced lower-resolution images, particularly in low-light conditions at shift changeovers. Others had better image quality but were positioned at angles that made truck differentiation difficult — too side-on, or with obstructions that regularly came into frame.
Two cameras, however, had genuinely good sight lines over the main loading zone. One covered the loader's primary work area with a clear view of bucket cycles. The second covered the truck approach and departure lane. Together, they gave us the coverage we needed without any new hardware.
For the remaining gaps, we recommended two camera repositions rather than new installations. Moving an existing camera ten meters carries a very different cost and disruption profile from a new installation with fresh cabling and mounting infrastructure.
What We Needed to Detect
The detection task had three components:
- Truck arrival and departure — classify the truck type, record entry and exit time, tie each truck to the loading event that followed
- Loader bucket cycle count — detect each scoop cycle for the primary loader and determine whether it resulted in material being deposited into a truck
- Loading completion signal — detect when a truck was fully loaded and began moving toward the exit, distinguishing this from partial loading or loader passes that did not result in a deposit
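The three detection outputs above map naturally onto a small event schema. Below is a minimal sketch of what that schema might look like; the class and field names are our illustration, not the production data model:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class TruckEvent(Enum):
    ARRIVAL = "arrival"
    DEPARTURE = "departure"

@dataclass
class TruckObservation:
    truck_class: str                      # site-specific truck type label
    event: TruckEvent
    timestamp: datetime
    loading_event_id: Optional[int] = None  # ties the truck to its loading event

@dataclass
class BucketCycle:
    loader_id: str
    timestamp: datetime
    deposited: bool                       # True if material went into a truck

@dataclass
class LoadingEvent:
    event_id: int
    truck_class: str
    started: datetime
    completed: Optional[datetime]         # None while loading is in progress
    deposit_cycles: int                   # bucket cycles that deposited material
```

Keeping "deposited" on the bucket cycle and "completed" on the loading event is what lets the system distinguish an in-progress load from a finished one, and an empty swing from a real deposit.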
None of these are tasks that a generic object detection model can handle reliably. A generic model can detect that a truck exists. It cannot tell you what loading state the truck is in, whether the bucket deposited material or swung past empty, or when loading is complete versus in-progress.
Building the Training Dataset
We captured training footage during a structured session with the site's operations team. The session was designed to generate examples of every detection state we needed: truck arrivals and departures with the site's actual vehicle mix, loader cycles at different points in the loading sequence, partial loads, aborted loading sequences, and various lighting conditions including shift changeover periods when light levels change rapidly.
Annotation was done collaboratively with the site's operations supervisor. This is an important distinction from annotating with an outside team that has no domain knowledge. The supervisor's input on class definitions saved multiple retraining cycles. Specifically, how the site defined a "complete loading cycle" versus an "incomplete pass" was different from what an outside annotator would have guessed from the footage alone.
The final training dataset was not large by AI research standards. For the truck detection and classification task, a few hundred labelled examples per truck class was sufficient. For the loader cycle detection task, which required more nuanced temporal understanding, we needed more examples across the variety of cycle types and lighting conditions.
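Before training on a dataset this size, it is worth auditing label counts per class and per lighting condition, so that under-represented combinations are caught before they surface as production failures. A simple sketch of that audit (the annotation record format and class names are hypothetical):

```python
from collections import Counter

# Hypothetical annotation records: (clip_id, class_label, lighting_condition)
annotations = [
    ("clip_001", "articulated_truck", "day"),
    ("clip_002", "rigid_truck", "changeover"),
    ("clip_003", "articulated_truck", "day"),
    ("clip_004", "complete_cycle", "changeover"),
    ("clip_005", "incomplete_pass", "day"),
]

by_class = Counter(label for _, label, _ in annotations)
by_lighting = Counter(light for _, _, light in annotations)

# Flag any class below a minimum example count before training starts
MIN_EXAMPLES = 2
under_represented = sorted(c for c, n in by_class.items() if n < MIN_EXAMPLES)
```

Running this kind of check after each capture session tells you whether the next session should target specific truck types or lighting windows rather than collecting more of what you already have.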
Model Architecture and Edge Deployment
The site had intermittent connectivity. Internet was available but not guaranteed to be reliable enough for a cloud-dependent inference pipeline. This meant the detection needed to run on-site, which influenced the model architecture choice.
We used a YOLOv8 family model fine-tuned on the site-specific training data, deployed on an edge device installed in the site's control room. The device runs continuously, processing feeds from the two primary cameras in real time. Detection events are logged locally with timestamps and stored for the shift reporting window.
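The on-device side of this reduces to a loop that runs inference on each frame and appends detection events to a local, append-only log that survives connectivity outages. A minimal sketch of the logging half, with a stub standing in for the actual fine-tuned model call (in production that stub would wrap the ultralytics YOLO inference API):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

EVENT_LOG = Path("shift_events.jsonl")  # local store on the edge device

def detect(frame):
    """Stand-in for the fine-tuned YOLOv8 model call.

    In production this would run inference on the frame and map raw
    detections to site-specific event labels. Returns a list of
    (label, confidence) tuples.
    """
    return []

def log_event(label: str, confidence: float) -> dict:
    """Append one timestamped detection event to the local JSONL log."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "label": label,
        "confidence": round(confidence, 3),
    }
    with EVENT_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event
```

Logging locally first, in a line-per-event format, is what makes the later cloud sync optional rather than load-bearing: when connectivity drops, the shift record is still complete on the device.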
A lightweight dashboard running locally shows the current shift count with per-truck and per-loader breakdowns. A shift summary report is generated automatically at the end of each scheduled shift and distributed to operations and management via email. When connectivity is available, a copy is also pushed to a cloud-hosted record for longer-term trend analysis.
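The per-truck and per-loader breakdowns are straightforward aggregations over the local event log. A sketch of the summary step, assuming event dicts shaped like those the edge logger writes (the labels and field names here are illustrative):

```python
from collections import defaultdict

def shift_summary(events):
    """Aggregate logged detection events into one shift's summary.

    `events` is a list of dicts with at least a "label" key; labels
    and extra fields below are illustrative, not the production schema.
    """
    loads_by_truck = defaultdict(int)
    cycles_by_loader = defaultdict(int)
    for e in events:
        if e["label"] == "loading_complete":
            loads_by_truck[e["truck_class"]] += 1
        elif e["label"] == "bucket_deposit":
            cycles_by_loader[e["loader_id"]] += 1
    return {
        "total_loads": sum(loads_by_truck.values()),
        "loads_by_truck": dict(loads_by_truck),
        "cycles_by_loader": dict(cycles_by_loader),
    }
```

The same summary dict can feed the local dashboard, the end-of-shift email, and the cloud copy, so all three always agree.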
Performance in Production
The system went through a validation period running alongside the existing manual tally before replacing it. During validation, the automated count ran in parallel with manual recording so the two could be compared in real time.
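The validation comparison itself can be as simple as a per-shift diff between the two tallies, surfacing only the shifts that disagree for manual review. A sketch under that assumption (the tuple format is our illustration):

```python
def validation_report(shifts):
    """Compare automated and manual counts per shift during validation.

    `shifts`: list of (shift_id, auto_count, manual_count) tuples.
    Returns (agreement_rate, mismatched_shifts) so operations can
    review only the shifts where the two tallies disagree.
    """
    mismatches = [(sid, auto, manual)
                  for sid, auto, manual in shifts
                  if auto != manual]
    agreement = 1 - len(mismatches) / len(shifts)
    return agreement, mismatches
```

Reviewing mismatched shifts one by one is also how the edge cases that need retraining get found, rather than averaging them away in an aggregate accuracy number.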
Initial accuracy on truck detection and classification was high from the first deployment, because this was visually the most straightforward task. Loader cycle detection required one iteration of retraining after the validation period revealed a specific edge case — a loader movement pattern that occurred during engine warm-up that was being classified as a loading cycle.
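Retraining was the actual fix for the warm-up pattern, but this class of false positive can also be caught by a cheap complementary rule: only count bucket cycles while a truck is actually in the loading zone. A sketch of that gating idea (timestamps and interval format are our illustration):

```python
def count_valid_cycles(cycle_times, truck_intervals):
    """Count bucket cycles that occur while a truck is in the loading zone.

    cycle_times: loader cycle timestamps, in seconds since shift start
    truck_intervals: (arrival, departure) timestamp pairs for trucks
    Cycles outside every truck interval (e.g. engine warm-up movements
    before the first truck arrives) are discarded.
    """
    def truck_present(t):
        return any(arrival <= t <= departure
                   for arrival, departure in truck_intervals)
    return sum(1 for t in cycle_times if truck_present(t))
```

A rule like this costs nothing at inference time and makes the model's mistakes less expensive while the retraining data accumulates.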
After that iteration, the system has run through hundreds of shifts with strong agreement between the automated count and the weighbridge-based reconciliation. Discrepancies that still exist are traced to cases where the weighbridge tickets contain manual entry errors, not to issues with the vision system count.
What Made This Possible Without New Cameras
Several factors made the existing camera approach viable on this site:
- The two best cameras had adequate resolution and frame rate for the detection tasks, even if other cameras on-site did not
- The loading zone was sufficiently enclosed that camera angles were not compromised by activity outside the detection area
- Custom training meant the model learned to work with the image quality and characteristics of these specific cameras rather than requiring generic high-spec input
- The detection tasks were sufficiently well-defined that we did not need 360-degree coverage, just reliable coverage of the specific events that mattered
Not every site will have existing cameras in positions that support the detection task. Camera assessment is always the first step, not an assumption. But in many cases, the cameras already installed are a lot more useful than site managers initially expect.
Lessons That Transfer to Other Sites
The most transferable lesson from this project is that the camera assessment phase is non-negotiable. It is the difference between a project that delivers results from existing infrastructure and one that requires a capital spend on new hardware before anything useful can happen.
The second transferable lesson is that domain knowledge in annotation pays disproportionate dividends. Thirty minutes with the site supervisor clarifying class definitions saved us at least one full retraining cycle and several weeks of production validation time.
The third is that edge deployment needs to be in scope from the start, not retrofitted. Connectivity assumptions that work during testing can fail in production on remote sites, and the architecture decisions made early are expensive to change later.
Running a quarry or mine site?
Start with a free camera assessment. We will review your existing infrastructure and tell you honestly what is usable and what the system can realistically deliver before any hardware or training spend is committed.
Get a Free Camera Assessment