I’ve been running into several problems with restoring MySQL backups. Namely, the backups come from an environment other than the one I’m working in and I’m forced to remove superuser commands contained in the backups.
The problem is that when I try to remove those commands I constantly get UTF-8 decoding errors, because the dumps contain loads of byte sequences that aren’t valid UTF-8.
Why would MySQL encode a backup as UTF-8 if the data isn’t actually UTF-8? This feels like bad design to me.
Not sure if this helps you, but for anyone working with utf8 and MySQL, it’s worth reading up on the details of their Unicode support. Especially the part where it says that ‘utf8’ is an alias for ‘utf8mb3’, which caps every character at three bytes and may not be compatible with what other systems consider to be ‘utf8’. If you aren’t careful with this you will have problems with high code points, like emoji, which need four bytes and therefore utf8mb4.
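To see why emoji in particular blow up, here’s a quick look at how many bytes each character needs in UTF-8 (pure Python, nothing MySQL-specific):

```python
# Each UTF-8 character takes 1-4 bytes; utf8mb3 only allows up to 3.
for ch in ["a", "é", "€", "😀"]:
    encoded = ch.encode("utf-8")
    print(f"U+{ord(ch):04X} {ch}: {len(encoded)} bytes -> {encoded!r}")

# U+0061 a: 1 bytes -> b'a'
# U+00E9 é: 2 bytes -> b'\xc3\xa9'
# U+20AC €: 3 bytes -> b'\xe2\x82\xac'
# U+1F600 😀: 4 bytes -> b'\xf0\x9f\x98\x80'   <- won't fit in utf8mb3
```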
Not only are there several character sets that all look like they’re ‘Unicode’, but the character set in MySQL can be set per session, per client, per server, per database, per table and per column. All six of them can have different encodings.
Just make sure they’re all using the same 4-byte Unicode character set (utf8mb4). A different collation is fine when backing up, because collation only matters when comparing and sorting strings.
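If you want to check what you’ve actually got, something like this works — a rough sketch using PyMySQL, where the connection details and the table name are placeholders:

```python
import pymysql

# Placeholder credentials - point these at your own server/schema.
conn = pymysql.connect(host="localhost", user="app", password="secret",
                       database="mydb", charset="utf8mb4")
with conn.cursor() as cur:
    # Server / database / client / connection level settings.
    cur.execute("SHOW VARIABLES LIKE 'character_set%'")
    for name, value in cur.fetchall():
        print(f"{name:30} {value}")

    # Column level can differ again - check one table's columns.
    cur.execute(
        "SELECT column_name, character_set_name, collation_name "
        "FROM information_schema.columns "
        "WHERE table_schema = %s AND table_name = %s "
        "AND character_set_name IS NOT NULL",
        ("mydb", "my_table"),
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```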
That’s… extremely useful to know and highlights the issues I have with databases like MySQL.
IMO, a DB should always have a type defined for a field, and if that type is UTF-8 and it really means just the mb3 subset, you should only be able to store mb3 data in it. Not enforcing the field type is what leads to data-driven functional and security issues. There should also be restrictions on how data is read from fields depending on their type, with mb3 allowing MySQL’s character-set transformations and binary requiring a straight read/write, with some process outside the DB itself handling the resulting binary data stream.
/rant
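To make the enforcement point concrete: whether a utf8mb3 column rejects a 4-byte character or silently mangles it depends on sql_mode, which is exactly the kind of optional enforcement I mean. A rough PyMySQL sketch (assumes MySQL 8.0, where ‘utf8mb3’ is accepted as a charset name; connection details are placeholders):

```python
import pymysql

# charset=utf8mb4 so the emoji survives the client/connection hop intact.
conn = pymysql.connect(host="localhost", user="app", password="secret",
                       database="mydb", charset="utf8mb4")
with conn.cursor() as cur:
    cur.execute("CREATE TEMPORARY TABLE t (s VARCHAR(10) CHARACTER SET utf8mb3)")
    try:
        cur.execute("INSERT INTO t (s) VALUES (%s)", ("hi 😀",))
    except pymysql.MySQLError as exc:
        # Strict sql_mode (default since 5.7): error 1366 "Incorrect string value".
        # Non-strict mode: the value is truncated at the emoji with only a warning.
        print("rejected:", exc)
conn.close()
```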
Character encoding and type coercion errors are so common. But a lot of bugs also come from programs trying to do “the right thing”. Like in OP’s case: they’re just trying to import some data, and maybe the data was never even intended to be interpreted as utf8, but the tool they’re using to remove the commands insists on treating it that way. Sometimes the safest thing to do is to treat the data as binary until you actually need to care otherwise.
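Something like this is what I mean — a minimal sketch that strips lines from the dump without ever decoding it as UTF-8. The prefixes are just examples of statements that typically need elevated privileges; adjust them, and the file names, to your own dump:

```python
# Filter a mysqldump file as raw bytes - no decoding, so no encoding errors.
SKIP_PREFIXES = (
    b"SET @@GLOBAL.GTID_PURGED",
    b"SET @@SESSION.SQL_LOG_BIN",
)

with open("backup.sql", "rb") as src, open("backup.filtered.sql", "wb") as dst:
    for line in src:                      # bytes in, bytes out
        if line.lstrip().startswith(SKIP_PREFIXES):
            continue                      # drop the privileged statement
        dst.write(line)
```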
This is the right answer. I had the job of planning a schema update to fix this shitty design.
That said, Unicode and character encodings are incredibly complex things that are not easily implemented. For example, two strings in UTF-8 can contain the same number of characters but be hugely different in size (up to 4x, since each character takes anywhere from one to four bytes). It’s well worth reading through some articles to get a feel for the important points.
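To make the size point concrete (plain Python, nothing MySQL-specific):

```python
# Two five-character strings: same length in characters, 4x apart in bytes.
for s in ("hello", "😀😀😀😀😀"):
    print(len(s), "chars ->", len(s.encode("utf-8")), "bytes")
# 5 chars -> 5 bytes
# 5 chars -> 20 bytes
```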