Feb 3, 2026 - Revisiting ChatGPT’s financial advice, 15 months later

In October 2024 I asked ChatGPT for a portfolio built to endure a messy world (original post here). At the time, I expected the Israel–Palestine conflict (on top of Russia–Ukraine) to be the main driver of financial uncertainty in the coming months.

Fifteen months later, those conflicts are still with us, but the volatility came from a different menu: surprises out of the U.S., Trump’s tariff war, a U.S. strike on an Iranian nuclear facility, and, most recently, Maduro’s capture.

This post is my promised check-in: what would have happened if I’d followed ChatGPT’s financial advice? I re-ran the original constraints using a 2024 model snapshot, translated the output into a real ETF portfolio, and measured the results.

The portfolio setup

The original post suggested a broad allocation. To make it testable, I re-ran a prompt in the old GPT-4o model (via OpenAI’s API, specifically gpt-4o-2024-08-06 to avoid lookahead bias) to generate a concrete, investable portfolio, using the exact constraints from that post.

prompt

Construct a portfolio of stock indices, US bonds, corporate bond indices, and commodities, adhering to the following percentages:

50% Equities: Diversify across geographies and sectors (some growth-oriented, some defensive).

30% Bonds: Split between safe-haven government bonds and investment-grade corporate bonds.

20% Commodities & Alternative Investments: This portion can include gold, oil, and possibly some real estate or infrastructure investments that are less correlated with traditional assets.

Suggest only assets accessible to an individual investor via a US investment account.

Here’s the generated response:

response

Creating a diversified portfolio as per your specifications involves selecting specific ETFs and index funds that are accessible to retail investors. Here’s a suggested portfolio:

50% Equities

US Equities (25%)

  • SPDR S&P 500 ETF Trust (SPY): Covers large-cap U.S. equities.

International Equities (15%)

  • iShares MSCI ACWI ex U.S. ETF (ACWX): Provides exposure to large and mid-cap stocks outside the U.S.

Emerging Markets (10%)

  • Vanguard FTSE Emerging Markets ETF (VWO): Targets growth-potential markets.

30% Bonds

US Government Bonds (15%)

  • iShares 7-10 Year Treasury Bond ETF (IEF): Focuses on U.S. Treasury bonds, providing a safe-haven investment.

Corporate Bonds (15%)

  • iShares iBoxx $ Investment Grade Corporate Bond ETF (LQD): Offers exposure to investment-grade U.S. corporate bonds.

20% Commodities & Alternative Investments

Gold (10%)

  • SPDR Gold Trust (GLD): Direct exposure to gold prices.

Oil (5%)

  • United States Oil Fund (USO): Tracks the daily price movements of West Texas Intermediate (WTI) light, sweet crude oil.

Real Estate & Infrastructure (5%)

  • Vanguard Real Estate ETF (VNQ): Provides exposure to U.S. real estate investment trusts (REITs).

Considerations

  • Diversification: The selection covers multiple geographies, sectors, and asset classes, reducing risk through diversification.
  • Accessibility: All assets are available through major U.S. brokerages.
  • Liquidity: ETFs and index funds offer liquidity, making it easier to enter and exit positions.
  • Risk Tolerance: Adjust weights as needed based on individual risk tolerance and investment horizon.

Before investing, consider consulting with a financial advisor to tailor the portfolio to your personal financial situation and goals.

This gave me a realistic, investable snapshot of “what ChatGPT would have done” at the time.
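A note on reproducibility: the call has to pin the dated snapshot (gpt-4o-2024-08-06) explicitly, rather than whatever “gpt-4o” happens to resolve to today. Here’s a minimal sketch of such a call against the chat-completions endpoint (illustrative only, not the exact code I ran):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PinnedModelCall
{
    static async Task Main()
    {
        // Target the dated snapshot explicitly to avoid lookahead bias.
        var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

        var body = @"{
            ""model"": ""gpt-4o-2024-08-06"",
            ""messages"": [{ ""role"": ""user"", ""content"": ""Construct a portfolio of stock indices, US bonds, corporate bond indices, and commodities..."" }]
        }";

        var response = await http.PostAsync(
            "https://api.openai.com/v1/chat/completions",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Print the raw JSON response; the portfolio text is in choices[0].message.content.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}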

The results (Oct 4, 2024 → Jan 27, 2026)

A couple of days ago I pulled the adjusted closing prices (which account for dividends, splits, and capital-gain distributions) from Yahoo Finance (e.g., GLD history) and calculated the ROI for each sleeve.

Asset | Weight | Adj. price 2024-10-04 | Adj. price 2026-01-27 | ROI
SPDR S&P 500 ETF Trust (SPY) | 25% | 564.43 | 695.49 | 23.22%
iShares MSCI ACWI ex U.S. ETF (ACWX) | 15% | 54.59 | 71.97 | 31.84%
Vanguard FTSE Emerging Markets ETF (VWO) | 10% | 46.56 | 57.64 | 23.80%
iShares 7-10 Year Treasury Bond ETF (IEF) | 15% | 92.01 | 95.98 | 4.31%
iShares iBoxx $ Investment Grade Corporate Bond ETF (LQD) | 15% | 105.48 | 110.81 | 5.05%
SPDR Gold Trust (GLD) | 10% | 245.00 | 476.10 | 94.33%
United States Oil Fund (USO) | 5% | 76.35 | 75.66 | -0.90%
Vanguard Real Estate ETF (VNQ) | 5% | 90.63 | 90.39 | -0.26%

Portfolio result: +23.74% over 15 months. Not bad.
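
The math behind that number is just a weighted average of each sleeve’s return. A quick sketch of the arithmetic, using the prices and weights from the table above:

// Weighted-average portfolio return from the adjusted prices above.
var holdings = new (string Ticker, double Weight, double Start, double End)[]
{
    ("SPY",  0.25, 564.43, 695.49),
    ("ACWX", 0.15,  54.59,  71.97),
    ("VWO",  0.10,  46.56,  57.64),
    ("IEF",  0.15,  92.01,  95.98),
    ("LQD",  0.15, 105.48, 110.81),
    ("GLD",  0.10, 245.00, 476.10),
    ("USO",  0.05,  76.35,  75.66),
    ("VNQ",  0.05,  90.63,  90.39),
};

double portfolioRoi = 0;
foreach (var h in holdings)
{
    double roi = h.End / h.Start - 1;   // per-asset ROI
    portfolioRoi += h.Weight * roi;     // contribution weighted by allocation
}

Console.WriteLine($"Portfolio ROI: {portfolioRoi:P2}");   // prints ~23.74%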

Closing thoughts

Net result: the portfolio held up. The single biggest contributor was a surprisingly strong gold run (GLD’s 94% gain on a 10% weight added roughly 9.4 of the 23.7 points), with stocks doing their job and everything else mostly hovering. That’s encouraging, because diversification did its job, and also a warning, because one lucky ingredient can dominate the outcome.

So: would you trust an AI to build your portfolio, or are you already using it (even if just as a second opinion)?

Dec 23, 2025 - Scaling Decision-Making: Guardrails Beat Gatekeepers

I walked into a budget meeting expecting the usual spreadsheet tour: what went up, what went down, and where we should trim next year. We did that. But in the middle of the call, something else surfaced: we were debating how decisions should happen, not just what the numbers were.

Someone suggested a simple rule: any spend above a threshold should go to a committee for approval. It was well-intentioned, practical, and it looked good on paper.

My first reaction was: this won’t work.

Not because the threshold was too high or too low, but because the framing was off. It pushed strategy down to the operational level. It asked leaders across the company to act like mini-CFOs, weighing trade-offs they don’t have the context for. It created a system that depended on individual judgment instead of a repeatable path.

That’s the opposite of systems thinking.

Systems thinking

Systems thinking means you don’t fix outcomes by handing people more decisions; you fix outcomes by designing the system that shapes those decisions. If you want different behavior, you redesign the structure: the incentives, the sequence, the guardrails.

In practice, that means leadership has to do the hard part: define the path and remove ambiguity. The operation executes within that path.

So instead of telling everyone “spend less,” I prefer to ask: what are the rails that will naturally lead to better spending?

Enabling constraints (the rails)

That’s where enabling constraints come in. They are rules and flows that:

  • limit the decision space,
  • make the right choice the default,
  • and do both without killing autonomy.

In our case, the first draft of the policy was “anything above X needs approval.” The system-thinking version was different:

  • First, internalize. If a request is about an external vendor, the first step is always to route it through the internal capability. Only if that path is exhausted does the approval flow even start. The point isn’t “protecting internal teams.” It’s reducing avoidable spend and fragmentation when the capability already exists, and escalating fast when it doesn’t.
  • Second, decide in the right moment. The worst time to approve a vendor is when the proposal is already on the table and the deadline is tomorrow. The flow needs to exist upstream, where the decision can be made calmly.
  • Third, separate strategy from execution. Teams should focus on delivery and client outcomes. Leadership should define guardrails with the people who run the work and monitor the system.

The difference is subtle but huge. Instead of asking people to make better decisions in chaotic situations, you reshape the process so the right decision is the easy one.

None of this eliminates judgment. It just moves judgment to the right layer: design the default path once, then let the organization run it repeatedly.

What I proposed

I proposed a lightweight “rail system” built around the reality of our operation:

  1. Define the default path. For certain categories (vendors, external analysis, specialized work), the default is internal first. If it can’t be done internally, the request escalates.
  2. Make the moment explicit. Add checkpoints inside the workflow where the request must be raised early, not when it’s already late.
  3. Keep approvals simple. No ceremonies, no weekly meetings, no bureaucracy. Just clear inputs, fast review, and a decision trail. Clear inputs = scope, deadline, expected outcome, alternatives considered, and a rough cost envelope.
  4. Define exception paths. Legal/compliance requirements, incident response, hard external deadlines, or unique expertise not available internally. If an exception applies, the flow skips steps, but still leaves a decision trail.

In other words: leadership designs the system; the team follows the rails.

Closing thoughts

This is not about being “tough on costs.” It’s about clarity. If you want a company to scale, you cannot rely on individual heroics or ad-hoc judgments. You need a system that consistently nudges people toward the strategic path.

And building those systems is not easy. You need to identify what should change, understand why the current structure enables the old behavior, visualize alternatives, and think through side effects, incentives, and how to earn buy-in from the people involved. That’s real work.

You know the rails are working when urgent approvals go down, repeat vendors consolidate, and lead time for “yes/no” decisions becomes predictable.

The easy thing is to outsource the problem to leaders who are already stretched in their day-to-day. “Just send everything for approval” sounds simple, but it’s naive to think that passing the burden downstream will solve the root issue.

Build the rails. Then let the train move fast.

Apr 2, 2025 - Anecdotal evidence of increased coding productivity with AI

I have been using AI to help me code (and, really, to help me with almost everything) for a couple of years now. More and more, I see how it boosts my productivity.

When programming, I keep code suggestions minimal in the IDE and really like the chat interface for tackling coding tasks. Writing down the problem, its constraints, and the task goals in a structured way has, in my experience, shown really great results and visibly boosted my productivity. As a bonus, it has positively influenced the way I communicate about technical challenges with colleagues.

So today I set aside some time to share a simple example: the anecdotal evidence. My task was to implement the following feature in a section of our app:

  • Automatically compress uploaded images

It took 25 minutes from start to finish. Without AI, I estimate it would have taken me about 2 hours to search for references, write the code, test, and review before finishing this task.

👉 That’s roughly a 5x speed-up in this particular case.

Here’s a quick overview of how I structured my workflow using ChatGPT (GPT-4o model):

  1. Define the Problem: I started by clearly defining my task, which was to compress uploaded images automatically in two user flows.
  2. Generate Code: I asked ChatGPT to create a method for compressing images to high-quality JPEG, providing contextual code from my existing project.
  3. Integrate Changes: Next, I requested ChatGPT to modify two existing image upload methods, incorporating the new compression method.
  4. Create Unit Tests: Lastly, I asked ChatGPT to write a unit test for the new compression functionality following existing testing styles in the project.

Let’s take a deep dive.

The Workflow in Detail

Initially, I reviewed the source code to understand the user flows I needed to change and to locate the image upload entry points.

After getting familiar with the codebase, I concluded that the project didn’t yet define a method to compress images to JPEG. I also found an ImageExtensions class containing several image-manipulation methods (Rotate, Resize, thumbnail generation, and so on), which I decided was the right place to define the new compression method.

So, in my first prompt to ChatGPT, I asked it to create the compression method, passing along additional contextual information to ensure the generated code would be compatible with my project:

prompt

In C# I’m using the following code for resizing images:

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

public static Image Resize(this Image image, int newWidth, int newHeight, string url = null) 
{
    using (var newImage = new Bitmap(newWidth, newHeight))
    {
        foreach (var item in image.PropertyItems)
            newImage.SetPropertyItem(item);

        using (var graphics = Graphics.FromImage(newImage))
        {
            graphics.SmoothingMode = SmoothingMode.AntiAlias;
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
            graphics.DrawImage(image, new Rectangle(0, 0, newWidth, newHeight));
            var format = url != null ? GetImageFormat(url) : image.RawFormat;
            return Image.FromStream(newImage.ToStream(format));
        }
    }
}

Using the same referenced namespaces, how can I define a method for compressing images?

I want to compress PNG images into JPEG with high quality.

In its response, ChatGPT correctly produced the following method (I’m omitting the additional explanation and the usage sample it also returned):

response

public static Image CompressToJpeg(this Image image, long quality = 90L)
{
    if (image == null) throw new ArgumentNullException(nameof(image));

    // Set JPEG quality encoder
    var jpegEncoder = ImageCodecInfo.GetImageDecoders()
        .FirstOrDefault(c => c.FormatID == ImageFormat.Jpeg.Guid);

    if (jpegEncoder == null)
        throw new InvalidOperationException("JPEG encoder not found");

    var encoderParams = new EncoderParameters(1);
    encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, quality);

    using (var ms = new MemoryStream())
    {
        image.Save(ms, jpegEncoder, encoderParams);
        return Image.FromStream(new MemoryStream(ms.ToArray()));
    }
}

I skimmed through the code, and it seemed to implement everything I requested, so I copied and pasted the generated code into my project’s ImageExtensions class.
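
Since I omitted the usage sample ChatGPT returned, here’s a minimal example of my own showing how the new extension method gets called (file paths are illustrative):

// Illustrative usage: load a PNG and re-encode it as a quality-90 JPEG.
using (var original = Image.FromFile(@"Resources\Boots.png"))
using (var compressed = original.CompressToJpeg(90L))
{
    compressed.Save(@"Resources\Boots.jpeg", ImageFormat.Jpeg);
}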

I continued by asking ChatGPT to adjust two of my application user flows to use the new method, with additional considerations:

prompt

Now consider these two methods which handle Image objects:

    private string SaveCover(string coverName, Stream coverStream)
    {
        var path = GenerateImageName("covers", coverName);
        using (var img = Image.FromStream(coverStream))
        using (var stream = img.ToStream())
            return store.SaveObject(stream, path);
    }

    private string SaveIcon(string iconName, Stream iconStream)
    {
        var path = GenerateImageName("icons", iconName);
        using (var img = Image.FromStream(iconStream))
        using (var stream = img.ToStream())
            return store.SaveObject(stream, path);
    }
  1. Check if the file name indicates that the image is PNG (or other uncompressed image file).
  2. In case it is, call CompressToJpeg() and replace the name extensions to .jpeg.

As a result, both methods were modified as requested, and ChatGPT proposed a helper method IsUncompressedImage to be used by both methods in accordance with the DRY (“don’t repeat yourself”) principle.
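
I won’t reproduce that helper verbatim, but it boils down to a file-extension check, roughly like this (the exact list of extensions is my paraphrase, since the prompt only said “PNG (or other uncompressed image file)”):

// Rough paraphrase of the helper: flags file names whose extension suggests
// a format worth re-encoding as JPEG (PNG, BMP, TIFF).
private static bool IsUncompressedImage(string fileName)
{
    var extension = Path.GetExtension(fileName)?.ToLowerInvariant();
    return extension == ".png" || extension == ".bmp"
        || extension == ".tiff" || extension == ".tif";
}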

Here’s the final code generated by ChatGPT for one of the methods after the introduced changes:

response

private string SaveIcon(string iconName, Stream iconStream)
{
    var isUncompressed = IsUncompressedImage(iconName);

    var finalName = isUncompressed
        ? Path.ChangeExtension(iconName, ".jpeg")
        : iconName;

    var path = GenerateImageName("icons", finalName);

    using (var img = Image.FromStream(iconStream))
    {
        var processedImage = isUncompressed ? img.CompressToJpeg(90L) : img;
        using (var stream = processedImage.ToStream(ImageFormat.Jpeg))
        {
            return store.SaveObject(stream, path);
        }
    }
}

Again, I reviewed the code and was happy with the proposed changes. After pasting it into my application code, everything compiled without any issues.

Finally, before wrapping up the code changes, I wanted a unit test for the new CompressToJpeg method, so I sent ChatGPT another message asking it to create a test method analogous to an existing test in my project:

prompt

Now, create a unit test for the “CompressToJpeg” method. Follow the style below:

    [TestMethod]
    public void GetGeolocatedImageCoordinateTest()
    {
        var image = Image.FromFile(@"Resources\geolocated.jpg");
        var c = image.GetCoordinate();

        Assert.IsNotNull(c);
        Assert.AreEqual($"{-22.8939933776855}", $"{c.Latitude}");
        Assert.AreEqual($"{-43.1236000061035}", $"{c.Longitude}");
    }

The test should:

  1. Load a @"Resources\Boots.png" image
  2. Convert it to Jpeg
  3. Assert that the compressed image size is less than 25% of the original image size

Don’t save the compressed image to disk.

And voilà, the unit test was correctly created according to my request:

response

[TestMethod]
public void CompressToJpegTest()
{
    var image = Image.FromFile(@"Resources\Boots.png");

    using (var originalStream = new MemoryStream())
    using (var compressedStream = new MemoryStream())
    {
        // Save original image as PNG to stream
        image.Save(originalStream, ImageFormat.Png);
        var originalSize = originalStream.Length;

        // Compress to JPEG and save to stream
        var compressed = image.CompressToJpeg(90L);
        compressed.Save(compressedStream, ImageFormat.Jpeg);
        var compressedSize = compressedStream.Length;

        // Assert that the compressed image is less than 25% of the original size
        Assert.IsTrue(compressedSize < originalSize * 0.25,
            $"Expected compressed size < 25% of original. Original: {originalSize}, Compressed: {compressedSize}");
    }
}

After pasting the new test method into my test class, I was able to quickly run it and confirm that it passed 🟢.

Once I finished the code changes, I performed a manual test to validate that the task was completed:

[Screenshot: upload request in Chrome's network tab]

Success!

Wrapping Up

This was just one of many examples where AI tools can dramatically cut the time needed for manual code structuring and other fiddly tasks. This task was a great candidate because it was mostly digital plumbing, relying on high-level operations (like JPEG compression) already provided by the underlying framework.

Several studies try to capture how much productivity AI tools can bring. At the end of the day, it all depends on how frequently and how well you use them in your workflows.

As with anything, usage brings experience, and we become better at identifying the best situations and strategies for extracting productivity from these tools. A naive approach of blindly using AI in everything may backfire.


My complete conversation with ChatGPT can be accessed in the link below: