
The next revolution in growing

As this industry matures and becomes even more competitive, what will be the next revolution that gives growers the edge? Will it be lights? Nutrients? I say no: the next revolution in growing is going to be data, and for some that revolution has already started.

This is data in volumes far beyond what you could manually note down on a daily basis (a good place to start), data at a very granular level, and the ability to compare it against previous grows. With this comes the need for more advanced analytics and computing platforms to help make sense of the data and identify correlations and other points of interest. This used to be the realm of the big players in the game; until now, the cost of entry has been prohibitive for the smaller ones.

With cloud computing providing access to masses of computing power and a very easy entry point for creating your own solution, this is an exciting time. Individuals and companies can start tapping into the machine learning capabilities offered by many of these service providers and build out their own data analysis platforms, allowing them to make better-informed decisions and improve their ROI.

The data, and what can be derived from it, will become as valuable as the product itself, if not more so. You’ll look to purchase clones/seeds etc… along with the data package to go with them.

As stated, this revolution has already started for some, and the cost of entry is now at the point where everyone can get involved.



Hi Peter,

You are absolutely right about the data-driven horticulture approach. So what’s possible at the moment:

  • It is possible to collect environmental data such as CO2 level, temperature, RH, composition of water, etc. by using your system or a system like our HashCropter.

  • It is possible to control lighting, HVAC, etc. based on that sensor data. But normally the dependencies have to be implemented by human beings based on experience!

  • It is much harder to collect data about the health/condition and reactions of the plants. The sensor technology is quite expensive if you think, for example, of hyperspectral imaging. At the moment the @mastergrowers are still the best sensor :wink:

  • But the hardest part is still creating algorithms to dynamically automate the cultivation process based on sensor data. And that’s where AI comes into play.

We have been working with AI for almost two years now. The problem is still to collect enough of the relevant data and to create valid models, since the processes inside the plant are still a black box…
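The human-implemented dependencies from the list above can be pictured as a small rule engine: sensor readings in, control actions out. A minimal sketch in Python, where every sensor name and threshold is a hypothetical example rather than anything from a real system:

```python
# Minimal sketch of human-encoded control rules driven by sensor data.
# All sensor names and thresholds here are hypothetical examples.

def control_step(reading):
    """Map one set of sensor readings to a list of control actions."""
    actions = []
    if reading["temp_c"] > 28.0:      # grower-chosen temperature setpoint
        actions.append("hvac:cool_on")
    if reading["rh_pct"] > 60.0:      # avoid high humidity
        actions.append("dehumidifier:on")
    if reading["co2_ppm"] < 900:      # enrich CO2 while lights are on
        actions.append("co2_valve:open")
    return actions

reading = {"temp_c": 29.3, "rh_pct": 55.0, "co2_ppm": 850}
print(control_step(reading))  # ['hvac:cool_on', 'co2_valve:open']
```

The point of the sketch is exactly Christoph’s: every rule encodes a grower’s experience; nothing here is learned from the data itself.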




I couldn’t agree more. I have only seen a handful of grow operations (mostly indoor or mixed-light greenhouses) truly embrace the data side of the business. It’s like any other business: leverage the analysis to optimize your operation. I’ve even seen one grower measure the efficiency of his individual trimmers to create more efficient seasonal workflows with his staff; it was more akin to Six Sigma, which is almost unheard of in the industry.

One main challenge I’ve seen is getting different sets of data to work with one another. There are seemingly disparate data sets (lighting, CO2, temp, humidity, etc.) that can be difficult to integrate into a holistic view of how everything is working together to optimize a grow. What systems/platforms/methods do you use to help the data “talk” to each other so the different data sets work together? Or are you still manually integrating data sets?
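One low-tech way to make disparate streams “talk” is to align every stream onto a shared time bucket before analysis, so each bucket ends up as one row with all sensors side by side. A minimal sketch; the sensor names, timestamps, and bucket size are all invented for illustration:

```python
# Sketch: integrating disparate sensor logs by aligning them onto a
# shared time bucket. Sensor names and timestamps are hypothetical.
from collections import defaultdict

def integrate(streams, bucket_seconds=60):
    """Merge several {sensor: [(epoch_ts, value), ...]} streams into
    one row per time bucket, so all readings can be analysed together."""
    rows = defaultdict(dict)
    for sensor, samples in streams.items():
        for ts, value in samples:
            bucket = ts - ts % bucket_seconds  # floor to the bucket start
            rows[bucket][sensor] = value       # last sample in a bucket wins
    return dict(sorted(rows.items()))

streams = {
    "temp_c":  [(1000, 24.1), (1063, 24.4)],
    "co2_ppm": [(1010, 980),  (1070, 1010)],
}
for bucket, row in integrate(streams).items():
    print(bucket, row)
```

Real systems need to handle gaps, clock drift, and differing sample rates, but the bucketing idea is the core of most time-series joins.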

I’ve come across some different technology providers that have some interesting solutions:


Hi Alex,

That’s a big problem indeed. It’s pretty much like in IoT: too many platforms are out there. So open source, or open APIs, is our answer for the future to solve that problem.




Great points raised, and the nail has been hit on the head a good few times! The key is to make the data available to other systems, and for products to be able to integrate with each other and share the data.

We’ve taken the approach with our system of allowing others to leverage our infrastructure, either for retrieving sensor data and using it in their own downstream applications/services, or for connecting their devices and publishing data. @cschubert, as you rightly point out, APIs are the way forward; what would be good is agreeing on a standard to use for the message structure.

I see a lot of the challenges that the financial market data world has faced and solved, not to mention that if they could do it again, they would do it differently. The catch is that at some point, when the data becomes valuable and a commodity, along comes the “wonderful” world of data governance, licensing, audits, and so the list goes on. This industry does need to look at the lessons that can be learned from others that have become data-centric, and the challenges they faced.

From the company perspective, we focus on sensor-collected data and responses to that data, either as alerts or control signals. We don’t specialise in the visual data side of things; however, we know companies that do, and their focus is not on sensor data collection. We each have features/data sets the other would like to incorporate or access, but lack the resources or time to implement. Our current infrastructure is being designed to make it possible to share these services and data, and to more easily mix and match products and services into the desired solution.

As more systems start to share data between them, we reach the critical-mass point where there is enough data to get really meaningful information out of an AI/ML system. By sharing data you could go from collecting data from 100 sensors to pulling down data from hundreds of thousands of them, accelerating the learning capability of the systems. The catch, of course, is twofold: 1) there needs to be a willingness to partner up and share data, and 2) the data needs to be standardised and structured. (Number two is the real kicker; get it right early on and it makes a world of difference.)
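To make point two concrete, here is one possible shape such a standardised sensor message could take. Every field name and unit choice below is a hypothetical proposal, not an agreed standard:

```python
# Sketch of one possible standardised sensor message. All field names
# and conventions here are a hypothetical proposal for discussion.
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    device_id: str    # stable identifier for the publishing device
    sensor_type: str  # e.g. "temperature", "co2", "rh"
    value: float
    unit: str         # agreed units keep pooled data sets comparable
    timestamp: str    # ISO 8601, always UTC

reading = SensorReading("grow-7/canopy-2", "temperature", 24.6,
                        "celsius", "2018-05-01T14:30:00Z")
print(json.dumps(asdict(reading), sort_keys=True))
```

Whatever the final shape, the win comes from everyone publishing the same fields in the same units, so downstream ML pipelines never have to guess.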

Taking a look at setting up a data standard would be an interesting challenge and one we’d be up for taking on along with any others from the community who would be interested.


Still, a Master Grower with experience with a particular cut and a notebook will suffice.


Yes, you are absolutely right, but imagine a master grower with experience with a particular cut and a digital notebook which provides all relevant information, including live data such as the development of cannabinoids in relation to:

  • environmental parameters
  • the state of health of the plant
  • etc.

So hopefully technology can help master growers gain their first, second, third… dan.




“The next revolution…” Yes, I’m certain that that’s what Doc Holliday was thinking when he drew drunk and shot himself in the foot. But he was famously an overcompensating depressive. Christoph is wise to refer back to the internal processes in the plant, since they, not we, are in charge, and all we can do is mess them up (unless we are lucky) with all our projections, instrumentation, neural networks, algorithms, and data.

  The next "revolution" that will give growers the edge is *systems*
  efficiency. But we knew that already, didn't we?

Hi Alexander,

you are absolutely right. But gaining an understanding of those processes will hopefully be possible. The problem will then be: what to do with that understanding?

In the end, I also would not like to get my cannabis from fully automated systems which treat plants like machines.

But hopefully there will be some master growers who will wisely adopt such technology in order to improve conditions for the plant and, as a result, the quality of the final product for consumers/patients.




I think you both make valid points: the growing revolution will be the synthesis of the master grower and an overall embrace of technology to achieve optimal growth of the best possible cannabis. One cannot exist without the other. The tech will make data logging and automation easier as we move forward. I can still do math in my head (that may be a bit of a challenge), on paper, with an abacus, or with a calculator. At the end of the day, I can let the computer do the bulk of the calculations while I analyze the data, interpret the meaning, and apply the results.


Here is what I wrote about automation in my **We Serve the Soil** document on bio-dynamic Cannabis horticulture:

An Aside about Automation and Remote Monitoring

          On the surface, these seem to be good things, but in practice they result in a lack of critical attention being paid to operations that are automated, and the missing of early cues to trigger preventive or corrective actions, with consequent arrears and catch-up — or not — when contagions strike or conditions deteriorate. Remote monitoring and adjustments lead to fetishistic busy-work changes when the plants need constant, stable rhythms. Most remote monitoring is incapable of catching critical early changes in the health of the plants or subtle developmental cues indicating natural readiness to flower.

          Semi-automation, including fail-safe controls and alerts on gross parameters such as power outages, temperature, and humidity, is a good middle-ground. Cannabis cultivation is very skilled-labor intensive, and the best balance is to fully automate only background functions, semi-automate most low-skill operations, and keep entirely manual and real-time high-skill creative and critical maintenance and diagnostic functions. Full automation can sow over-confidence, can distance growers from their plants, and can be a disastrously false economy (and isn’t inexpensive or robustly reliable, either).
  This presupposes that we are seeking to produce the very best weed and not production-line schlock. The whole **We Serve the Soil** doc can be downloaded from:


  You might find it of interest...

Hi Alexander,

Many thanks for the document! Your statement is absolutely true, too. It’s pretty much the same with all the AI-based prediction systems, like predictive policing, etc. It simply doesn’t work yet!

But it is also true that there are technical means to gather much more information than most cultivators are able to. From our point of view, the focus should be on collecting data in order to understand more about the plants, rather than on using such technology to automate and/or “save” labor. At the moment this technology will lead to much more labor, since the collected data has to be interpreted, etc.

Have you ever experimented with a tunable spectrum? I would love to set up a test. We have a 24-channel LED fixture which allows us to simulate the sun quite accurately from 350–1000 nm (it could be used as a reference), and I would like to run a test against an optimized but also tunable LED setup, all based on your holistic approach…
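For what it’s worth, the channel-mixing part of such a test can be prototyped before touching hardware: given a spectrum for each channel, a least-squares fit picks drive levels whose sum approximates a reference spectrum. A toy sketch with invented Gaussian channel shapes; a real 24-channel fixture would use its measured spectral data instead:

```python
# Sketch: choosing drive levels for a multi-channel tunable LED so the
# combined output approximates a reference spectrum. Channel shapes and
# the reference below are toy Gaussians, not real fixture data.
import numpy as np

wavelengths = np.linspace(350, 1000, 200)          # nm

def channel(peak, width=40.0):
    """Toy spectral power distribution for one LED channel."""
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

# A few hypothetical channel peaks (a real fixture might have 24).
peaks = [380, 450, 530, 620, 660, 730, 850]
A = np.stack([channel(p) for p in peaks], axis=1)  # (n_wavelengths, n_channels)

target = channel(550, width=150.0)                 # stand-in "sunlight"

# Least-squares mix; clip to non-negative, since drive levels can't be < 0.
weights, *_ = np.linalg.lstsq(A, target, rcond=None)
weights = np.clip(weights, 0.0, None)

fit = A @ weights
err = np.sqrt(np.mean((fit - target) ** 2))
print(f"RMS error of the mix: {err:.3f}")
```

With measured channel spectra in `A` and a measured solar spectrum as `target`, the same few lines give a first-pass channel recipe to refine by eye and by plant response.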




We have done the beginnings of such a tunable-LED experiment, working with Ed Stoneham at XtremeLUX. With some small guidance from me, his outfit has developed programming to vary the light on three independent axes: spectrum, duration, and intensity. That’s the new standard for LEDs (as is abandoning the crippling “fixture” legacy from the HID days, and providing instead a quasi-planar light-source that solves many problems in an indoor or light-dep grow). Contact him at or at [email protected].

  Most LED makers are totally missing the boat on these advances, yet another sorry legacy of the hydroponics mentality and "tradition" (or more accurately, bad or at least ignorant habits).

  And yes, we could be gathering much more data, but without a parallel advance in systems understanding, it would be little more than noise. Context is (almost) all. There is a huge upside potential for scientific advance, were we to get the requisite funding, either from the gov or from within the industry.

The skill of the master grower will lie not only in their knowledge and experience of the plants themselves but also in their ability to interpret the data.

I think more along the lines of automation and data being there to assist and enhance, not to replace, knowledge and experience. You still need to know the basics and get your hands dirty, and you need to know how to use the tools and equipment at your disposal.

With automation handling the more mundane tasks at present, and able to take on more complex activities as technology advances, I see experienced and master growers taking a particular interest in the data. The more data you have access to, the better decisions you can make; which decision and course of action to take is where experience kicks in. A scalpel in the hands of a surgeon is a precision instrument; in the hands of someone who’s read the book and watched the video, it is a danger. The same can be said for an irrigation/pump scheduler.

If there is one thing I’ve learnt over the years about the technology industry, it is just how fast it moves, and it keeps on accelerating. Right now we might all agree that a master grower is likely to outperform any automation system; in 10 years we may no longer agree.


A forum and collective such as the Growers Network might be a good place to start when it comes to building an understanding of the context. If we agreed on an initial data set along with the associated metadata, and had every grower on this forum able to submit it (ideally with automated gathering and submission), we’d be able to create a data pool/lake for people to work with and help build that systems understanding.

We are up for the challenge :slight_smile:


We are in too :wink:


I am basically in agreement, with, however, a giant “yebbit”: it is all too easy to underestimate the infrastructure and just plain work it takes to perform research like you describe. Without some rigorous standards of uniformity and diligent follow-through, the best we can do is generate an impressionistic and anecdotal sense of what we will have shown. Now, that is enough for some purposes, and in fact we here have done some loosey-goosey “research” along those lines, and found, for instance, that “miracle products” with “secret ingredients” are noticeably effective in inverse relationship to the quality of the soil they are grown in. In other words, when we use one such product with a fully biodynamic soil, it produces nearly or entirely unnoticeable improvements (unless it kills off the microbiome, which is all too common, in which case it produces negative results). When we use it with lousy soil, it produces somewhat noticeable improvements, or none at all.

There are too many variables between growers, and too many growers who are too busy or not competent enough to rigorously maintain separate controls, for this kind of informal experimentation to have any meaning beyond the impressionistic (which can still tell us what we need to know, which is usually that all we really need to do is use optimized soil and take it easy on the busywork).

  Maybe when there is a critical mass of automated systems to make uniformity and control-separation more likely, such distributed experimentation can be trusted to be credible and reliable. But then, getting all those growers to use the same protocols will be like herding cats.

Could not agree more, and this is the exact issue I faced in my previous career in market data: lots of different sources of data to deal with. As it happens, after some creative thinking we came up with an interface abstraction layer that neatly takes care of that problem. Data can be retrieved from a multitude of sources, normalised, and mapped to a canonical model if needed, without the need to change the format of the source data. For a large data-gathering project to work, it has to be as simple as possible to capture and submit the data.
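For illustration, the adapter idea behind such an abstraction layer fits in a few lines of Python. Both “vendor” record formats below are invented; the point is that each adapter maps a native record into the same canonical shape while the sources stay untouched:

```python
# Sketch of an interface abstraction layer: per-source adapters normalise
# native records into one canonical shape without altering the sources.
# Both input formats below are invented examples.

def from_vendor_a(rec):
    # Vendor A logs {"t": epoch_seconds, "tempF": ...} in Fahrenheit.
    return {"ts": rec["t"], "metric": "temperature",
            "value": round((rec["tempF"] - 32) * 5 / 9, 2), "unit": "celsius"}

def from_vendor_b(rec):
    # Vendor B logs {"time": epoch_seconds, "temp_c": ...} in Celsius.
    return {"ts": rec["time"], "metric": "temperature",
            "value": rec["temp_c"], "unit": "celsius"}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalise(source, records):
    """Map any supported source's records into the canonical model."""
    return [ADAPTERS[source](r) for r in records]

rows = normalise("vendor_a", [{"t": 1000, "tempF": 77.0}])
rows += normalise("vendor_b", [{"time": 1060, "temp_c": 24.0}])
print(rows)
```

Adding a new source then means writing one adapter; everything downstream of the canonical model is untouched, which is what keeps submission simple at scale.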