3 reasons why lighting industry professionals can’t ignore Artificial Intelligence

05 Dec 16:00


You have all heard about Artificial Intelligence (AI) and Machine Learning (ML), and how they will change our world[1], or end it[2].

It's clearly a subject that has been getting a lot of attention lately.

Two years ago, being the nerds we are, we started to have a look at it, and spent quite some time training on it. The question we had was: "could it be applied to our field?"

To get up to speed, we partnered with specialists, Ai Services, and the partnership was extremely useful. Thanks, guys! In this post I will describe why we think AI is a major game changer even in our industry.

But first, what are we talking about?

AI, or Artificial Intelligence, became effective thanks to a massive increase in computational power, and to dusting off some exotic but old mathematical objects called neural networks.

In the last year alone, AIs have busted criminals, read your email, checked your pictures, written articles, translated any language into another, and beaten you at chess, at Go, at League of Legends, even at planting salads. They have also driven your car safely, invested your money, helped doctors find tumors, detected risky behaviour from Facebook posts, and managed your power grid. Search for any of these on Google; you will find many more.

What is an AI?

Disclaimer: I will simplify, as I just want to introduce the point:

An AI is a function which performs tasks usually reserved for humans. A modern AI is based on one or more networks, which are trained on one or several tasks.

There are currently about twenty different network architectures; I'll show the most popular one, the standard neural network.

An example of a network: input comes in from the left, the result comes out on the right. In the middle, each connection has a "weight" and each neuron has a "bias". These are numbers which start out random.
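To make this less abstract, here is a minimal sketch of such a network in plain NumPy (layer sizes and values are arbitrary, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 4 inputs -> 3 hidden neurons -> 1 output.
# Weights sit on the connections, biases on the neurons; all start out random.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)                  # first layer + nonlinearity
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output, in (0, 1)

y = forward(np.array([0.2, -1.0, 0.5, 0.0]))
```

Since the weights are random, the output is meaningless for now; that is exactly the point of the next step.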


From there, for example, I feed pixels in on the left and get "is it a cat?" (yes/no) out on the right. This particular exercise is called a "classifier".

Now we have a random network, which is useless. We need to "train" it.

The training phase is based on an absolutely mesmerizing algorithm (actually a theorem) called back-propagation.

Back-propagation allows you, from examples where you know the expected result, to adjust all those random numbers towards "better" values. After a while, the network predicts the right answer.

Compared to usual fitting algorithms, the performance is orders of magnitude higher.

So bring in tons of images of cats and non-cats[3 - spoiler: after a long debate, we decided this is not a cat -], tell it for each one whether it's a cat or not, let the algorithm run, and let it adjust the network with the back-propagation magic wand.

After a while, this "thing" is able to detect what is a cat and what is not, based on pixel inputs alone.
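To make the whole loop concrete, here is the same mechanism on a toy problem (a made-up 2-D dataset instead of cat pictures; layer sizes, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "classifier": learn whether a 2-D point lies in the upper-right quadrant.
X = rng.uniform(-1, 1, size=(200, 2))
t = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

lr = 0.5
for _ in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    y = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # Backward pass: gradient of the cross-entropy loss w.r.t. each parameter
    dy = (y - t) / len(X)
    dW2, db2 = h.T @ dy, dy.sum(0)
    dh = (dy @ W2.T) * (1 - h**2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # Nudge every weight and bias towards "better" values
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((y > 0.5) == (t > 0.5)).mean()
```

Swap the 2-D points for pixels and scale the layers up, and you have the cat detector in spirit.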

Fig 1: Example of "is_cat==1"

What I described is called supervised learning, a sub-branch of the famous "Machine Learning".

And if your network has many layers, we can even use the coined term "Deep Learning". (Now we have a full Bullshit-Bingo sheet, bravo.)

So now my mathematical object can make sense of images. Pretty neat, eh?

How is it relevant for us?

Neural networks have a tremendous skill in finding "patterns". Imagine having 200 columns of data, where columns 12, 16 and 86 are only loosely correlated, but all together help you predict the end result.

Standard statistical methods have some issues with large multidimensional spaces; that is exactly where AIs shine.

Fig 2: Live Video: Bob V1.0 scanning through 120 dimensions for patterns

Simplifying to the extreme, we humans are pretty good at plotting x vs y and seeing if it correlates. An AI does this in n dimensions. Try a plot in twenty-six dimensions, and call me when you're done.
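For the simple linear case, even a plain least-squares fit shows what "finding the pattern across 200 columns at once" means (the data, columns and coefficients below are invented for illustration; a real job would use a trained network on messier, nonlinear data):

```python
import numpy as np

rng = np.random.default_rng(2)

# 1,000 rows, 200 columns of noise -- but columns 12, 16 and 86
# together determine the target (plus a little measurement noise).
X = rng.normal(size=(1000, 200))
y = 2.0 * X[:, 12] - 1.5 * X[:, 16] + 1.0 * X[:, 86] + 0.1 * rng.normal(size=1000)

# One fit across all 200 dimensions simultaneously
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The three largest coefficients point straight at the hidden pattern
top3 = set(np.argsort(np.abs(w))[-3:])
```

No human plotted 200 columns against each other; the fit searched the whole space at once.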

So we built our models, and "hit the street".

To explain why we, lighting professionals, should have a look into AI, let me give you three examples of jobs we were involved in over the last 18 months, from the most "scientific" one to two direct applications:

Job 1: Ultrafine spectrum prediction

In this job, the goal was, from a color point in a given system, to work back to "what is wrong with the system".

The usual way would be to model the system, run a Monte Carlo, and sort out the results in the desired range. This implies very, very large numbers of Monte Carlo runs.

We did a simple Monte Carlo to "pave" the result space around the desired area, and trained our model on it.

Then, with the color points as input, the AI delivers the initial parameters. 1M color points were defined inside the target, and we got 1M sets of parameters. It saved a couple of weeks of calculations, and the trained model can predict in real time.

Which means this could be embedded in a microcontroller for live "spec deviation" prediction, with the ability to alert the user that "maintenance might be required in the near future" before the problem happens.
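The scheme can be sketched as follows. The real optical system and model are not public, so a toy forward model stands in for the spectral system, and a cheap polynomial fit stands in for the neural network; the structure (pave with Monte Carlo, train the inverse, predict in real time) is the point:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(p):
    # Stand-in for the real system: parameter -> "color point" (2 coordinates)
    return np.column_stack([np.sin(p), p**2])

# 1) Monte Carlo: "pave" the region of interest with known parameter draws
params = rng.uniform(0.1, 1.0, size=5000)
colors = forward_model(params)

# 2) Train an inverse model on (color point -> parameter).
#    Here: polynomial features + least squares as a cheap stand-in.
def features(c):
    x, y = c[:, 0], c[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

w, *_ = np.linalg.lstsq(features(colors), params, rcond=None)

# 3) Real-time use: given a measured color point, recover the parameter
measured = forward_model(np.array([0.5]))   # pretend this came from a sensor
p_hat = features(measured) @ w
```

Step 3 is just a handful of multiply-adds, which is why it fits on a microcontroller.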

Job 2: Car EOL test bench load

In this job, a car manufacturer has 7 production lines running in parallel. There are close to a dozen thousand different models (as there are many options).

At the end of each line there is a mandatory test bench, which tests the entire car for validation.

It is extremely difficult to know in advance how long the test bench will be occupied per model, as it depends on many factors (individual part reliability, tolerances, inline measurement results, etc.).

From the partial test results gathered along the line, the question was: can you plan the test bench load?

The answer is yes: we could reach a couple dozen seconds of accuracy in load prediction for 30-minute tests. The improvement could not be measured, as the problem had not been solved before.

Job 3: Luminaire Reliability predictions

Take the car EOL test and apply it to a luminaire factory.

Given a set of products and their characteristics, can you make sense of these quality results?

The answer is yes: we could pinpoint which products were problematic, and which characteristics were troublesome.

Is it magic?

Contrary to what we read today (September 2017), it's not.

ML/AI is, for now, "glorified interpolation". It's fantastic within its scope, and tends to go completely off track outside its scope. We already have papers that confuse a criminality detector with a smile detector [4].

Now, with an n-dimensional scope, that's still quite a large scope. But it's like a plane or a car: make sure you know how it works before using it. It will take you places, but it can be dangerous.

Isn't it a crowded market already?

Yes and no.

There are huge actors in AI, but they are following a logical "platform" approach. They have no time to spend on your dataset, so they use an arms-race approach: more datacenters, more GPUs (see point 1 below). Oh, and you will pay for the datacenter/GPU lease. It's virtual real estate. :)

We decided on a different approach: our background in industry and statistics helped us define a couple of key must-haves in our offer.

Which brings me to our differentiators:

1. Pre-work

Being what we are (nerds), we still use our brains. We spend time on the dataset itself, using modern statistical tools to clean it up and reduce it. This quickly yields orders of magnitude in processing speed once the learning starts.
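As one example of that pre-work, a classic reduction tool is PCA via the SVD. The dataset below is synthetic (120 columns hiding 5 real degrees of freedom, with a 99% explained-variance cutoff as an arbitrary choice), but the reduction step itself is standard:

```python
import numpy as np

rng = np.random.default_rng(4)

# 120 raw columns, but only 5 independent degrees of freedom hide inside
latent = rng.normal(size=(500, 5))
X = latent @ rng.normal(size=(5, 120)) + 0.001 * rng.normal(size=(500, 120))

# Classic pre-work: center the data, then use the SVD (PCA) to see how
# many dimensions actually carry information before any training starts
Xc = X - X.mean(axis=0)
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()

# Keep just enough components to explain 99% of the variance
n_useful = int(np.sum(np.cumsum(explained) < 0.99)) + 1
X_reduced = Xc @ Vh[:n_useful].T
```

Training then runs on 5 columns instead of 120, which is where the speed-up comes from.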

2. SmartData

AI relies on data. And when we talk about production data, we know how sensitive it is.

We developed something we call "SmartData". It allows the customer to encrypt his data in a certain way, while we can still find patterns in it.

You encrypt your data and keep your keys; we do the predictions on the encrypted data and provide an encrypted result, which you decrypt with your keys.

So there is no risk of data leakage.
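The actual SmartData scheme is not public, but here is one toy way to see how pattern-finding can survive a customer-side transformation, for a linear model: an invertible "key" matrix known only to the customer scrambles the features, and the predictions still come out right (a real scheme would also protect the targets and the results; this only illustrates the feature side):

```python
import numpy as np

rng = np.random.default_rng(5)

# Customer side: raw data, and a secret invertible "key" matrix A
X = rng.normal(size=(300, 6))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0])
A = rng.normal(size=(6, 6))        # kept secret by the customer

X_enc = X @ A                      # only the transformed data leaves the site

# Provider side: fit a model on the transformed data, never seeing X.
# A linear fit is invariant under an invertible feature transform,
# so the predictions match those of a fit on the raw data.
w_enc, *_ = np.linalg.lstsq(X_enc, y, rcond=None)
y_pred = X_enc @ w_enc
```

Without A, the provider cannot reconstruct the original columns, yet the predictive pattern is intact.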

3. Model simplification

Our models, once trained, are passed through a series of algorithms to simplify them. If there is a need for real-time treatment, this even allows us to output them as a JavaScript function that runs on a laptop. The tradeoff between prediction precision and speed is discussed case by case.
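The simplification pipeline itself is out of scope here, but the last step can be sketched: once the model is small (here, a plain linear fit standing in for a distilled network), emitting it as a self-contained JavaScript function takes a few lines:

```python
import numpy as np

rng = np.random.default_rng(6)

# Pretend this tiny linear model is the already-simplified network
X = rng.normal(size=(100, 3))
y = X @ np.array([0.8, -0.3, 1.2]) + 0.1
w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(100)]), y, rcond=None)

# Emit the trained model as a standalone JavaScript function:
# no runtime, no framework -- just arithmetic that runs anywhere
js = "function predict(x) {{ return {}; }}".format(
    " + ".join(f"{c:.6f} * x[{i}]" for i, c in enumerate(w[:-1]))
    + f" + {w[-1]:.6f}"
)
```

The resulting string is pure arithmetic, so it runs in a browser, on a laptop, or anywhere a few multiply-adds fit.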

Impact on our industry

The impact on our industry, and actually on every industry, falls mostly on the Quality Managers.

For quality control, quality prediction, reliability prediction, and process control, AI is a breakthrough. The more complicated the process, and the more steps it has, the better AI performs compared to standard methods.

The second impact is on product architects. For them, defining "process windows" is critical, and there these tools and methods will become mandatory.


In conclusion, AI and machine learning are a new tool, not something "magic". But within its specific scope, which is quite large, it will quickly become unforgivable not to dive in.

And it's a lot of fun.