Estimates versus reality

Why your solar panels rarely produce what the brochure promised

Every solar installation begins with a number. Somewhere in the quote your installer hands you, in bold or italics or both, there is a line that says something like: "Expected annual production: 5,400 kWh." Sometimes it is dressed up as a range. Sometimes it is presented as a graph with reassuring confidence bands. Sometimes it is buried in a financial projection that turns it into euros saved over twenty-five years. But the number is always there, and it does most of the heavy lifting in the decision to sign.

A year later, the panels are on the roof, the data is flowing, and the number on the screen is rarely 5,400 kWh. It is 4,900. Or 5,200. Or, in a particularly sunny year, 5,800. Whether that gap is normal, worrying, or simply invisible because no one bothered to check is the topic of this article.

This is a follow-on to Reading your solar charts: what the numbers actually mean. That article was about how to interpret the data your system produces. This one is about how to interpret the gap between the data and the promises you were given.

The three estimates

When someone tells you their system was "estimated to produce X kWh per year", it helps to know which estimate they mean. There are three in common use, and they tend to give different numbers for the same roof.

The installer's estimate is the one in your sales quote. It is produced by the installer's own software, often a simplified simulation tool, sometimes a spreadsheet with a few default assumptions about losses. It serves a specific commercial purpose, which is to make the financial case for the installation look good. This does not mean the number is wrong, but it means it sits at the optimistic end of what is plausible. Installers who lose deals because of pessimistic estimates do not stay in business long, so the equilibrium pressure on these numbers is to be cheerful.

The PVGIS estimate comes from the European Commission's free PVGIS tool, which we mentioned in What to consider before installing solar panels. PVGIS is a satellite-based model calibrated against ground-station measurements, run by the Joint Research Centre, and refined over more than a decade. You enter your address, your panel capacity, your orientation and tilt, and the tool produces a monthly and annual production estimate based on the historical solar irradiance at your exact location. PVGIS has no commercial interest in the outcome. It tends to be conservative, particularly with its default 14% "system losses" assumption, which assumes a healthy and well-installed system but does not assume best-case conditions.

The detailed simulation estimate comes from professional tools like Aurora, PVSOL, or HelioScope, which are used by serious installers for serious projects. These tools model shading hour by hour throughout the year, account for soiling and degradation, and let you set the inverter type, panel temperature coefficient, and DC-to-AC ratio explicitly. The output of a properly run PVSOL simulation is usually within 5% of reality for a well-characterised roof. The output of an improperly run one is whatever the user wanted it to be.

For most residential installations, the installer's estimate is what you have at sale time and PVGIS is what you can check it against. If the two are within 10% of each other, the installer is being honest. If the installer's estimate is 20% above PVGIS for the same orientation and tilt, you should ask what assumptions are different. Often the answer is "an optimistic performance ratio", which is software jargon for "we assumed your system would lose less to heat, soiling, and other real-world losses than is realistic".

What real-world losses actually look like

The gap between the brochure and the meter is not random. It comes from specific, identifiable losses, and understanding them helps you read the gap correctly when you see it.

Panel temperature is the biggest factor people underestimate. Solar panels are rated at 25°C cell temperature. On a hot July afternoon, cells inside a black-framed panel on a roof in Antwerp can easily reach 60°C to 70°C. Each degree above 25°C costs roughly 0.4% of output for crystalline silicon panels. A 50°C panel is therefore producing 10% less than its label rating, even under perfect sunshine. This is also why a clear March day, when the cells stay cool, often outperforms a clear July day with much higher irradiance.
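
If you want to see the arithmetic, here is a minimal sketch of the derating calculation. The -0.4% per degree coefficient is a typical crystalline-silicon value, not a universal one; your panel's datasheet lists the exact figure as the temperature coefficient of Pmax.

```python
# Temperature derating for crystalline silicon, relative to the
# 25 degC standard test condition rating. The coefficient is a
# typical value, not a universal one: check your panel's datasheet.

def derated_power(rated_watts: float, cell_temp_c: float,
                  coeff_per_degc: float = -0.004) -> float:
    return rated_watts * (1.0 + coeff_per_degc * (cell_temp_c - 25.0))

print(derated_power(400, 25))   # 400.0 W -- label conditions
print(derated_power(400, 50))   # 360.0 W -- the 10% loss mentioned above
print(derated_power(400, 65))   # 336.0 W -- a hot July afternoon
```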

Soiling is the second underestimated loss. The clean roof in the photo on the installer's website is not the dusty, pollen-coated, occasionally bird-bombed roof you actually have. In a temperate climate like Belgium, soiling typically costs 2% to 5% of annual production, with most of the loss concentrated in dry summer weeks before the next significant rain. In drier climates the figure is higher. Cleaning helps, but cleaning is also a cost, and the maths rarely justifies professional cleaning for residential rooftops at current panel prices.

Shading is the most variable loss and the one most often missed entirely. A neighbour's tree that grew two metres taller since your installer's site visit can cost you 10% of your annual production without anyone noticing. A chimney that casts a shadow across one string for two hours each winter afternoon will not show up on a satellite image but will show up on your daily curves. Detailed simulations include shading. Quick installer estimates rarely do. The Benelux guide we cited above puts shading losses at 20% to 50% in problematic cases, which sounds extreme but matches what owners with poorly surveyed roofs eventually discover.

Inverter losses are 2% to 4% in modern equipment. The inverter has its own conversion efficiency curve, peaking around 97% to 98% in the middle of its operating range and dropping off at very low and very high power. This is not usually a problem unless the inverter is badly sized for the panels, in which case the losses can climb.

Wiring and mismatch losses account for another 1% to 3%. Long DC cable runs, mismatched panels in the same string, and minor differences in panel performance all contribute.

Module degradation removes 0.4% to 0.7% per year of output for crystalline silicon. After ten years, a panel produces around 95% of its original rated output. After twenty-five years, around 85% to 88%. This is a slow loss but a real one, and it accumulates over the lifetime of the system.
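
Because the loss compounds year on year, those long-term figures fall out of a one-line formula. A quick sketch, assuming a constant 0.5% annual rate, the middle of the range above:

```python
# Compound module degradation at a constant annual rate.

def remaining_capacity(years: int, annual_rate: float = 0.005) -> float:
    return (1.0 - annual_rate) ** years

print(f"after 10 years: {remaining_capacity(10):.1%}")  # ~95.1%
print(f"after 25 years: {remaining_capacity(25):.1%}")  # ~88.2%
```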

Combine all of these, and the gap between "what the panels could theoretically do" and "what your system actually delivers" is usually 20% to 30%. That range is consistent with PVGIS, whose default 14% "system losses" figure covers soiling, wiring, and inverter losses with temperature losses modelled separately on top, and it matches what real-world long-term studies confirm.
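
One detail worth making explicit: the losses compound multiplicatively, not additively, because each acts on what the previous losses left over. A sketch with illustrative mid-range values (your system's actual mix will differ):

```python
# Stacking the losses from the sections above. The values are
# illustrative annual averages, not measurements of any real system.

losses = {
    "temperature":          0.06,  # annual average, far below the July peak
    "soiling":              0.03,
    "shading":              0.05,  # highly site-specific
    "inverter":             0.03,
    "wiring/mismatch":      0.02,
    "degradation (yr 10)":  0.05,
}

delivered = 1.0
for loss in losses.values():
    delivered *= (1.0 - loss)

print(f"delivered fraction: {delivered:.1%}")        # ~78.2%
print(f"total loss:         {1.0 - delivered:.1%}")  # ~21.8%
```

Note that the multiplicative stack lands at about 22%, inside the 20% to 30% band, even though the individual percentages sum to 24.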

The first-year disappointment, and why it is not what you think

Almost every solar owner in Belgium has the same experience in their first year. They watch the production numbers carefully through the summer, are pleased with the big numbers in May, June, and July, and then watch with growing concern as autumn arrives and the numbers collapse. By December, the daily production is sometimes a tenth of what it was in June. By February, when they total up the year so far, they are well behind the installer's annual estimate, and the panic begins.

This is almost always a misreading of seasonal distribution. Belgian solar production is highly concentrated in summer: roughly 60% of the annual total is produced between April and September, with the remaining 40% spread thinly across the other six months. December alone typically produces 2% to 4% of the annual total. The system is not broken in December. It is doing what it always does in December. The installer's quote did not lie to you, but it may not have shown you the monthly breakdown clearly enough.

The Belgian average of around 1,000 peak sun hours per year is concentrated in this lopsided way because of latitude. At 51°N, the sun is high in summer and low in winter, the days are long in summer and short in winter, and the cloud cover is roughly the same in both seasons. Combine a low sun, short days, and unchanged cloudiness, and December's meagre production is unavoidable.

The diagnostic is simple: compare your monthly production to PVGIS's monthly estimate for your roof, not to your annual estimate divided by twelve. If your April through September totals are roughly in line with PVGIS's monthly figures, your system is doing fine. If you are losing the year in those months, you have a real problem. If you are losing the year in December, you are looking at the wrong data.
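
If you like to automate such checks, the comparison is a few lines. The monthly figures below are placeholders, not real PVGIS output; substitute the monthly table PVGIS gives you and your own meter readings:

```python
# Compare each month to PVGIS's monthly estimate, not to the
# annual estimate divided by twelve. Placeholder numbers throughout.

pvgis_monthly  = [150, 230, 390, 540, 620, 640,
                  630, 560, 430, 280, 160, 110]  # kWh, Jan..Dec, from PVGIS
actual_monthly = [140, 245, 370, 520, 590, 660,
                  610, 545, 400, 225, 150,  95]  # kWh, your meter

for month, (est, act) in enumerate(zip(pvgis_monthly, actual_monthly), 1):
    gap = (act - est) / est
    flag = "  <-- investigate" if gap < -0.15 else ""
    print(f"month {month:2d}: {act:3d} kWh vs {est:3d} estimated ({gap:+.0%}){flag}")
```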

Year-to-year variance is bigger than people expect

Even after you account for seasonal distribution, year-to-year variance in solar production is larger than the brochure number suggests. A typical Belgian residential system produces somewhere within ±10% to ±15% of its long-term average in any given year, purely because of weather. A sunny year like 2018 or 2022 can produce 15% above average. A cloudy year like 2024 can produce 8% to 10% below.

This range matters because the installer's estimate is a long-term average. Comparing it to a single year's production tells you almost nothing unless you happen to land exactly in the middle of the variance band. After three to five years of data, you can start to identify your own roof's average and stop comparing to the brochure entirely. After ten years, the brochure number is essentially irrelevant: your own data is a better predictor than any installer's estimate ever was.

The implication is that disappointment after year one is often premature, and bragging after year one is also often premature. The system is doing whatever it does. The weather is doing whatever it does. Both will average out, eventually, into something close to PVGIS, give or take a few percent. Three years of data is the minimum to make confident statements about whether your system is performing as expected.

Building your own baseline

After you have collected two or three full calendar years of data, you can start building a baseline that is more useful than any external estimate.

The procedure is straightforward. Average your monthly production across the years you have. The result is your roof's expected monthly production, calibrated to your specific orientation, shading, climate, and equipment. From year three onward, you can compare each month to this internal average rather than to the installer's quote. The result is a much tighter comparison: where the installer's annual number had a ±15% noise band around it, your internal monthly average has only the year-on-year weather variance to worry about, which is typically 5% to 15% per month.
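
A minimal sketch of that procedure, with placeholder numbers standing in for three completed years of monthly totals:

```python
# Build a per-month baseline from completed years, then express the
# current year as a deviation from it. `history` maps year -> twelve
# monthly kWh totals (placeholder numbers, not real data).

history = {
    2021: [140, 220, 380, 530, 600, 630, 615, 550, 420, 270, 155, 105],
    2022: [150, 240, 400, 555, 640, 665, 650, 575, 445, 285, 165, 110],
    2023: [135, 215, 370, 520, 595, 620, 605, 540, 415, 260, 150, 100],
}

baseline = [sum(year[m] for year in history.values()) / len(history)
            for m in range(12)]

this_year = [145, 230, 350, 510, 610, 645]  # Jan..Jun so far

for m, actual in enumerate(this_year):
    deviation = (actual - baseline[m]) / baseline[m]
    print(f"month {m + 1:2d}: {actual} kWh vs "
          f"{baseline[m]:.0f} baseline ({deviation:+.1%})")
```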

This is also where features like HelioPeak's year-on-year comparison views and annual report PDF earn their keep. The point is not to admire the pretty numbers. The point is to have the data assembled in a way that lets you say, on the system's fifth anniversary, something more precise than "I think it has been doing fine". An assembled report that compares this year's monthly totals to the average of the previous four years tells you, at a glance, which months are running ahead and which are running behind. The handful of months that consistently underperform across years usually reveal something specific: a tree that has grown into the morning sun, a panel that has lost capacity faster than its neighbours, a season when the inverter quietly throttled.

When to push back on the installer

There is a narrow but important category of cases where the gap between estimate and reality is not weather, not soiling, not normal variance, but a real failure of due diligence at sale time. The diagnostic for this is whether the gap appears consistently across multiple years and across the comparison with PVGIS.

If your installer quoted 5,400 kWh per year and you are producing 5,000 to 5,500 with weather variance, you are fine. If your installer quoted 5,400 kWh per year and you are consistently producing 4,200 with no obvious explanation, there are three things to check before calling them.

First, run PVGIS for your exact address, orientation, and tilt. If PVGIS also predicts around 4,200 kWh, the installer over-estimated by more than 20% at sale time, and the conversation to have is about why.

Second, look at your daily curves at solar noon on clear days. If your system never reaches the peak power you would expect for your panel capacity (a 5 kWp system should hit 4.0 to 4.6 kW briefly on the best spring days; see the sketch after the third check), something is wrong at the hardware or wiring level.

Third, check for shading patterns you did not notice during the sales visit. Walk around the property at different times of day, take photos of the panels with the sun in different positions, and look for the obvious culprits. Trees and buildings cast longer shadows in winter than the installer's summer site visit might have revealed.
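
To put the second check in numbers: the expected clear-day peak is a simple fraction of the DC rating. The 80% to 92% band below merely restates the 4.0 to 4.6 kW rule of thumb; treat it as a heuristic, not a guarantee.

```python
# Rough clear-day peak check against a system's DC rating. Temperature
# and conversion losses mean even a healthy system never hits the full
# nameplate figure.

def expected_peak_kw(dc_rating_kwp: float) -> tuple[float, float]:
    return (0.80 * dc_rating_kwp, 0.92 * dc_rating_kwp)

low, high = expected_peak_kw(5.0)
print(f"a 5 kWp system should briefly peak at {low:.1f}-{high:.1f} kW "
      "on the best clear spring days")
```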

If all three checks come back clean and you are still 15% to 20% below the quoted estimate, you have a legitimate case to take to the installer. Be specific. Bring PVGIS output, multi-year data, and dated photos. Most reputable installers will respond constructively to evidence; few will respond constructively to "my system isn't working". The data is your leverage.

The estimate that actually matters

Here is the small reframing that, in my experience, makes solar ownership less stressful: the installer's quote at sale time is for the financial decision, not for the performance assessment.

The financial decision is "should I install solar?" The right number for that decision is a reasonable estimate of long-term production, multiplied by a reasonable estimate of avoided electricity costs, integrated over the expected lifetime of the system. The number that comes out is the projected payback period. If that payback period is acceptable, you sign. The estimate has done its job. Whether year one is 4,800 kWh or 5,300 kWh does not matter very much in a 25-year financial calculation.
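
For illustration, a back-of-the-envelope version of that calculation. Every figure below is a deliberately hypothetical placeholder; installed cost, electricity prices, and self-consumption share all vary widely:

```python
# Simple payback sketch under loudly hypothetical assumptions.

system_cost_eur = 6000          # installed price, placeholder
annual_production_kwh = 5000    # long-term average estimate
self_consumed_fraction = 0.35   # share used directly in the home
price_per_kwh_eur = 0.35        # avoided grid price, placeholder
feed_in_per_kwh_eur = 0.05      # value of exported surplus, placeholder

annual_value = annual_production_kwh * (
    self_consumed_fraction * price_per_kwh_eur
    + (1 - self_consumed_fraction) * feed_in_per_kwh_eur
)
print(f"annual value:   ~{annual_value:.0f} EUR")
print(f"simple payback: ~{system_cost_eur / annual_value:.1f} years")
```

With these placeholders the payback lands around eight years, and shifting the self-consumed fraction or the grid price moves it by years, which is why the production estimate is only one input among several.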

The performance assessment is a separate exercise, and it starts at year three with your own data, not at year one with the installer's estimate. The reference point shifts from a marketing document to a multi-year average derived from actual measurements. By year five, the installer's quote should be a historical curiosity. By year ten, it has been replaced by something far more accurate: the lived behaviour of the actual system, on the actual roof, in the actual climate.

This is, in the end, the whole point of monitoring. Not to second-guess the installer, not to win arguments, not to feel virtuous about kilowatt-hours, but to build a private baseline that makes future judgements better than they would otherwise have been. The estimate is a starting point. The data is the destination.
