1. What’s Up with Daylight Saving Time? A Brief History and Analysis with Wolfram Language (Wed, Mar 5)
In the next few days, most people in the United States, Canada, Cuba, Haiti and some parts of Mexico will be transitioning from “standard” (or winter) time to “daylight” (or summer) time. This semiannual tradition has been the source of desynchronized alarm clocks, missed appointments and headaches for parents trying to get kids to bed at the right time since 1908, but why exactly do we fiddle with the clocks twice a year?
Why Do We Have Daylight Saving Time in the First Place?
The Sun has been humanity’s primary source for measuring the passage of time for almost all of human history, and, while it’s quite predictable for day-to-day uses, it has always had a few catches that have made timekeeping over longer durations or distances tricky. Unless you happen to live at or near the equator (in which case, you have a nearly constant 12-hour day/night cycle every day of the year), you’re no doubt aware that the length of the day changes throughout the year:
Compare this with the same period of time for a city located close to the equator:
This phenomenon gets more pronounced the farther away from the equator one moves:
Prior to the nineteenth century, most communities used local time determined by the Sun overhead. This variation throughout the year had little impact because time synchronization wasn’t necessary across long distances. You can see what your local solar time would read on a sundial with the SolarTime function:
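For instance, here is a minimal check of your own location (Here and Now can be replaced by any location and date):

    SolarTime[Here, Now]   (* local solar time, as a sundial would show it *)
    Now                    (* wall clock time, for comparison *)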
This is pretty close to my current wall clock time (prior to the daylight saving time shift):
However, with the progression of industrialization and urbanization favoring the use of mechanical clocks (and in particular the advent of long-distance rail travel and telecommunication), standardized time quickly became a necessity. In 1847, Greenwich Mean Time (GMT) became the British standard, placing noon at the time when the mean Sun reached its zenith in Greenwich. You can see this in the time zone offset information for London, which prior to that time used a local solar time that was a fraction of an hour off from GMT:
This was, however, not without its problems. By midsummer, some parts of the UK were seeing sunrise near 3am, while sunset was happening at 9pm:
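As a quick illustrative check (the coordinates below are roughly Glasgow’s and the date is arbitrary, not the exact values behind the plots above):

    Sunrise[GeoPosition[{55.86, -4.25}], DateObject[{1850, 6, 21}]]
    Sunset[GeoPosition[{55.86, -4.25}], DateObject[{1850, 6, 21}]]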
And because people didn’t simply adjust their daily routines to match the sunlight (they were now typically working according to standardized mechanical clocks), this resulted in “wasted” sunlight in the morning while people were sleeping and excess use of energy on artificial lighting in the evening.
People were quick to suggest resetting the standard time throughout the year to more closely align with daylight, but the idea didn’t gain real traction until World War I, when it was motivated largely by fuel conservation—and it quickly spread across Europe. During World War II, the UK actually instituted British Double Summer Time, in which the clock was moved forward two hours during the summer to maximize the use of natural light.
Compare the fraction of the summer during which Glasgow would have a pre-6am sunrise when staying on GMT versus with a two-hour shift:
Now compare the two options for a pre-9pm sunset over the same period:
Keeping Up with Daylight Saving Time
There have been many revisions to the schedule for daylight saving time within countries that observe it. There is even an entire database dedicated just to tracking these changing schedules across the globe, updated multiple times per year. This includes differing start/stop schedules, changes in which regions observe which shifts and which countries have opted to stop observing daylight saving time altogether (typically choosing to stay on “summer time” when they do).
Most of Mexico opted to stop observing daylight saving time at the end of 2022, for example:
To give this a try with another time zone, LocalTimeZone will identify the name of a time zone based on a location. You can also use TimeZoneConvert to identify the current time in a set location:
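A minimal sketch of both (the Sydney coordinates are just an example):

    LocalTimeZone[GeoPosition[{-33.87, 151.21}]]   (* time zone name for this location *)
    TimeZoneConvert[Now, "Australia/Sydney"]       (* current local time in that zone *)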
Changes in daylight saving time schedules, as well as the different dates on which offset changes take place, can lead to scheduling headaches for things like teleconferencing. Take, for example, the difference in time between offices in Chicago, Glasgow and Sydney throughout a period of just six weeks:
Because of the different onset and end dates for daylight saving time, nearly every week between the beginning of March and the first week of April ends up with some new difference in time zone offset between the three offices. The same thing happens again in the second half of the year when the first two cities transition off daylight saving time and the Australian office transitions back to it:
The US has also proposed (but not yet codified) a transition off of daylight saving time. Only time will tell if this semiannual tradition will continue, but in the meantime, Wolfram Language provides many tools for measuring and managing these time shifts. For more analysis on daylight times across the world, be sure to check out these posts from Wolfram Community:
Visualizing hours of daylight on the summer solstice
Circular sunset/sunrise calendar
2. A Whole New Ball Game: Game Theory in Wolfram Language 14.2 (Tue, Feb 25)
Do you want to make optimal decisions against competition? Do you want to analyze competitive contexts and predict outcomes of competitive events? Do you need to elaborate strategies and plans against adversity and test the effectiveness of those strategies? Or are you simply an undergraduate student struggling to cope with a required course on game theory at your college?
Wolfram’s new suite of game theory functions will enable you to generate, play with, test, solve and visualize any event using game theory.
History of Game Theory
Originally, game theory was limited to simple games of chance. These have a few common characteristics: only two players are involved at a time and either one player wins and the other loses or both players have a zero payoff. These games are known today as two-player, zero-sum games.
If we define game theory by the elaboration of optimal strategies, it may be as old as games of chance. We owe this early analysis (and probability theory!) to the famous polymath and gambler Girolamo Cardano. Alternatively, if we define it by the analysis of games based on the possible actions of all players, then we may attribute its origins to James Waldegrave and his analysis of the game Le Her in 1713, where minimax strategy solutions were given.
Of course, games can be more than just entertaining. After all, few games can claim to have a player base as big as the game of economics. Antoine-Augustin Cournot, in his 1838 research of the mathematical principles in the theory of wealth (Recherches sur les principes mathématiques de la théorie des richesses), discovered solutions to the price competition that would later be called Nash equilibria.
However, most of these discoveries are somewhat isolated and are usually considered as mere precursors to the modern subject. Game theory officially starts with John von Neumann and Oskar Morgenstern’s 1944 book Theory of Games and Economic Behavior, where the term “game theory” was coined and the axioms of game theory were determined—thus establishing its own field. This overview of game theory history would be incomplete without mention of John Nash, whose existence theorem for Nash equilibria transformed game theory only a few years later in 1950.
As you may imagine, this field of study has grown tremendously since then. Modern game theory is best summarized as the mathematics of decision making. At its heart, it studies the behavior of human, animal and artificial players in all forms of competition. Herbert Gintis said it best:
“Game theory is about how people cooperate as much as how they compete. Game theory is about the emergence, transformation, diffusion and stabilization of forms of behavior.”
Matrix Games: Cat and Mouse
Matrix games are also known as simultaneous games. Indeed, these games are characterized by the simultaneity of the actions of all players. As the name implies, matrix games are based on matrices. To be precise, any matrix game of n players may be expressed by an (n + 1)-dimensional array, where the last dimension is a vector of payoffs for all players.
Matrix games can be generated using the new MatrixGame function in Version 14.2. For example, here is a two-player game in which each player has a choice of two actions:
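As a sketch, the classic matching-pennies payoff structure follows the array convention described above; element [[i, j]] of the array is the payoff vector for the two players (the specific payoffs in the post’s example may differ):

    game = MatrixGame[{{{1, -1}, {-1, 1}}, {{-1, 1}, {1, -1}}}]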
This game is a zero-sum game, as can be seen from the Dataset representing the payoffs for each player:
We use a payoff of 1 to represent winning, and a payoff of –1 to represent losing. As such, in this game, the first player wins when the actions of both players are matching, and the second player wins when the actions of both players aren’t.
While simultaneous games are usually expressed in terms of matrices, these rapidly become difficult to read as the number of players and actions increases. Hence, our team developed MatrixGamePlot for visualizing this class of games:
MatrixGamePlot and other functions are designed to work for games with any number of players. You may find that "SplitSquare" is a more intuitive layout for games with two players, while games with more than two players are better visualized using the default "BarChart":
As good as these visualizations may be, it is difficult to infer the story behind this game just from general visualizations. For ease of interpretation, consider two players: a cat and a mouse. The cat can either search the house or the yard, and the mouse can hide in the house or the yard. Of course, the cat wins if it is in the same place as the mouse, and the mouse wins if they are not in the same place.
The dataset and plot for the cat-and-mouse version of the game may be more easily read by specifying GamePlayerLabels and GameActionLabels in this game:
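A sketch of what that might look like, assuming the label options accept player-wise and action-wise lists (the exact form of the label specifications is an assumption):

    catMouse = MatrixGame[{{{1, -1}, {-1, 1}}, {{-1, 1}, {1, -1}}},
      GamePlayerLabels -> {"Cat", "Mouse"},
      GameActionLabels -> {{"House", "Yard"}, {"House", "Yard"}}];
    MatrixGamePlot[catMouse]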
Of course, cats are lazy. It seems most likely that our cat is too sedentary to choose to go out of the house for a mere mouse. Knowing this, an opportunistic mouse should choose to always go to the yard, as this should lead to a higher payoff. To verify this, the mouse should use MatrixGamePayoff on this game and strategy, which allows it to calculate the expected payoffs of a game based on a given strategy:
But what if the cat isn’t lazy? Some cats have those mystical eyes, unpredictable, a perfect poker face. To our mouse, it seems this cat will do whatever is necessary to win. The mouse must revisit its strategy, making it stable and strong, making sure it is at least as likely to win as the cat. The mouse uses VerifyMatrixGameStrategy to test all strategies where each animal chooses either the house or the yard, and to verify if a particular strategy is a Nash equilibrium. Unfortunately, it seems that all cases are unstable and the cat may be at an advantage:
Our mouse has one last ace up its sleeve: FindMatrixGameStrategies! This powerful function is our game solver, computing Nash equilibria, that is, strategies that neither player has any interest in changing. Using this tool, the mouse realizes its salvation lies in randomness:
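A sketch of the call on the cat-and-mouse game defined above; for this payoff structure, the equilibrium mixes the two actions equally for each player:

    FindMatrixGameStrategies[catMouse]   (* both players end up with a 50/50 mixed strategy *)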
This strategy is mixed, meaning the probability of some actions is not 0 or 1. In this case, the mouse will have to do a coin flip: if heads, stay in the house; if tails, go to the yard. Note that the cat cannot take advantage of this strategy, so its best strategy is also a 50/50.
Granted, the game so far is a bit unrealistic. The mouse likely has many more places to hide, and the cat has many more places to search. This would imply that each player has more than two actions, and indeed there are other games for which this happens. For example, in the Morra game, each player has 50 possible actions. All previously seen features generalize to this or any number of actions:
Of course, it’s hardly a party when only two players play at one time. You can also specify matrix games such as El Farol Bar with any number of players, although I wouldn’t wish that many mice on even my worst enemy:
Here, we use GameTheoryData, a helpful tool for classical games that will be explained later.
Tree Games: A Game-Changing Revolution
Tree games are also known as sequential games. In these games, there are multiple actions, each taken by a single player in a given order. Like a decision tree, each decision reduces the number of possible actions until an end node is reached, where all players have a given payoff. As the name implies, tree games are based on Tree data structures. As such, they can be generated using nested lists following the structure of trees. Tree games are a big revolution in game theory, as, instead of a single event, they allow the analysis of a group of asynchronous interconnected decisions. This extends the applicability of game theory to many complex phenomena, otherwise out of reach using matrix games. Chess, for example, is a tree game, although an extremely large one, making its direct analysis as a tree game impractical.
Calling something a game usually implies that it is done for entertainment and not to be taken too seriously. That’s not the case in game theory, where a game refers to an event consisting of one or multiple decisions. Indeed, by that definition, almost everything that ever happened is a game, from throwing a rock into a lake to overthrowing a monarchy. Consider the latter as a “game,” where a colony has the choice to either rebel or concede, and in return, if there is rebellion, the country may grant independence or suppress the rebellion, and if the colony concedes, the country may tax or not. This situation can be represented using TreeGame :
Classically, tree games are represented as trees. As such, the use of the "Tree" property may be sufficient for basic tree games. However, to have more control over the plotting and to represent accurately more advanced tree games, TreeGamePlot is likely a better visualization:
If the colony rebels, the country has the choice to either grant independence or suppress rebellion. Since losing a colony is costly, the country always has greater interest in suppressing rebellion, as shown by comparing the second payoff of each outcome:
Even only in terms of game theory, it is evident the country is advantaged in this game. Often in tree games, it is the player that plays last that is advantaged, as the payoff is chosen directly. In this case, whatever strategy the colony employs, the country can ensure a positive payoff by simply choosing to suppress or tax, depending on the action of the colony:
It turns out that being taxed is better than being suppressed for this colony. Thus, even though this may not satisfy both players, the subgame perfect equilibrium is found when the country taxes the colony. This can be shown by solving the tree game using FindTreeGameStrategies:
Of course, tree games can have more than just two consecutive actions. For example, consider this game called Centipede, for which the name choice is still a mystery to me:
Tree games aren’t limited to two players either. For example, consider an inheritance game where we track the inherited golden cactus belonging to grandfather Zubair through his entire family:
Game Data: A Numbers Game
Game theory is plagued with a seemingly limitless number of named games. Without playing the blame game, let’s just say it makes the subject unreasonably daunting to beginners. Truth be told, with a decent understanding of matrix and tree games, you’ll generally be able to understand 99% of all named games. Whether you need a game quick and easy or you’d like to analyze a game you’ve never heard of before, GameTheoryData can help you. GameTheoryData currently has nearly 50 curated games:
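For example, you can list the available games and pull one out by name (the exact game name strings, such as "PrisonersDilemma", are assumptions here):

    GameTheoryData[]                     (* list of curated named games *)
    GameTheoryData["PrisonersDilemma"]   (* one classic game *)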
It turns out it is quite difficult to infer the meaning and utility of a game just from a list of payoffs. Since each game has its own story, origin, quirks and properties, we wanted to allow users to explore the richness of game theory with well-curated games right within the language. Thus, each game has handy features enabling anyone to understand, research and characterize it. You can find a textual description of the game:
Learn about its origins in the literature:
Find the game classes it belongs to:
And, of course, play with it:
Ahead of the Game
We know that game theory can be a bit obtuse to the uninitiated. In this respect, our first goal with these game theory features is to ease the process of constructing and playing with games, keeping it accurate and useful. Using extensive and approachable documentation, all these features have been kept in line with the fun and interesting character of games:
In fact, we’ve poured a lot of effort into the documentation of these functions. The number of examples, explanations and extensive consideration of even minute features goes far beyond the typical documentation for new features at launch. This level of documentation should enable you to pick up game theory in a matter of minutes:
Links to all these resources are readily available in one place to enable users to have a complete overview of these functionalities:
Game Over
Our current game theory features cater to learners of game theory more than other clientele, but don’t worry. We won’t declare “game over” early. We have many ideas for the future of Wolfram’s game theory suite. We know Wolfram features aren’t complete without vast generalizability. It may take some time, but we hope some generalizations of current functions may lead to great applicability to research and professional contexts. Most importantly, we’d love to get some feedback about what our users would like to see in the next iteration of these features.
Don’t be late to the game and try out these new features for yourself!
3. Nobel Prize–Inspired de novo Protein Design with Wolfram Language (Thu, Feb 20)
When I read a recent New York Times article on AI, I didn’t think I would be following in the footsteps of a Nobel laureate, but I soon discovered that I could do just that with Wolfram Language.
The Nobel Prize in Chemistry for 2024 was awarded for computational protein design and protein structure prediction, which have been active areas of research for several decades. The early work was built upon a foundation of physics and chemistry, attempting to model the folding of the chain of amino acid residues comprising a protein into a three-dimensional structure using conformational analysis and energetics. More recently, AI methods have been brought to bear on the problem, making use of deep neural networks (DNNs) and large language models (LLMs) such as trRosetta, AlphaFold and ESMFold. The work of David Baker, one of the laureates, was recently showcased in a New York Times article.
In their 2021 paper, Baker’s group described computational experiments that optimized a random sequence of amino acids into a realistic protein sequence and folded it into a three-dimensional structure. This process was repeated 2,000 times giving a “wide range of sequences and predicted structures.” The really exciting part came next: they made 129 synthetic genes in the lab based on the sequences, inserted them into the genome of E. coli bacteria, isolated and purified the new proteins and obtained their structures by x-ray crystallography and NMR spectroscopy, which closely matched the predicted structures.
We set out to explore the “computational X” part of their experiment in Wolfram Language. Some of the new features of the just-released Version 14.2 made this task surprisingly simple.
Folding an Amino Acid Sequence
Let’s begin with a protein of known structure. The N-terminal domain of the amyloid precursor protein (APP), which is implicated in Alzheimer’s disease, is a good example. The Protein Data Bank (PDB) entry ID is 1MWP. We can retrieve the citation title for the published data with this new-in-14.2 service connection request:
And we can retrieve the structure as a BioMolecule with this request:
We can also get the same result with much less typing with:
The full crystallographic structure is comprised of a single protein chain and many water molecules that co-crystallized with the protein molecule. The water molecules are collected into their own chain for convenience, so the BioMolecule has two chains:
The protein structure is comprised of a single chain that possesses two α-helices and several β-strands that can be seen visually here:
and tabulated here as residue ranges:
The "ESMAtlas" service is also new in 14.2 and allows one to fold a sequence using the model from Meta AI. This is the service request to fold the amino acid sequence:
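In outline, this goes through the standard service framework; the connection name "ESMAtlas" comes from the text above, but the request name and parameter key used below are placeholders, not the documented ones:

    esm = ServiceConnect["ESMAtlas"];
    folded = ServiceExecute[esm, "FoldSequence", {"Sequence" -> sequence}]   (* request name and parameter are hypothetical *)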
The folded structure is also composed of an α-helix and several β-strands:
However, we can see here that the longer helix in the AI-folded structure is two residues shorter than in the crystal structure, and residues 73, 74 and 75 do not form a helix:
Two β-strands have been lost in the AI-folded structure:
So, how good is the fold quantitatively? The ESMAtlas service computes an atom-wise confidence which is stored in the "BFactors" property of the BioMolecule. The individual values range from 0 to 1, with higher values indicating greater confidence of the predicted three-dimensional position. Here are the atomic confidences for the atoms of the first five residues:
We can use these values to compute an overall confidence of the folded sequence, specifically, the root mean square of the atomic values:
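A one-line sketch of that measure (this is essentially the confidence helper referred to in the Summary; flattening the per-atom values is an implementation choice):

    confidence[bm_] := RootMeanSquare[Flatten[bm["BFactors"]]]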
That’s pretty high, so it should be “close” to the experimental structure, and we can get an exact numerical comparison with the function BioMoleculeAlign, which is a prototype based on MoleculeAlign:
An RMS difference (RMSD) of the backbone atoms of 1.38 Å is pretty good, and visually we can see that the folded structure matches the experimental structure fairly closely. As expected, the deviation is largest at the N- and C-terminal portions of the protein:
To get a broader overview of the folding accuracy, we did a search on the PDB website for monomeric, single-chain proteins with 95–105 residues, with the structure determined by x-ray diffraction at a resolution of 2.0 Å or better. These criteria weed out many problematic entries at the source, but there are several other potential issues that need to be dealt with.
First, databases are not perfect. Even though the search specified “protein entities,” some proteins conjugated with oligosaccharides were included in the search results. They have the chain type "Branched":
Here is what the first one looks like. The sugar moieties are rendered at atomic-level details, as in MoleculePlot3D :
So, let’s remove those two hits:
Second, Meta AI’s protein folding model only accepts sequences comprised of a very limited number of the more than 500 known naturally occurring amino acids, many of which are found in proteins in the PDB. There are 21 proteinogenic amino acids that are coded for by DNA, and ESMFold uses only 20 of them (selenocysteine is the maverick amino acid).
Amino acids are often represented with their three-letter abbreviations, Ala for alanine, Trp for tryptophan, etc. For even more brevity, biologists also use one-letter codes (only the proteinogenic amino acids have one), as shown in this table:
We can use the one-letter codes to construct a filter:
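A minimal sketch of such a filter (acceptableSequenceQ is one of the one-liners listed in the Summary; it simply checks that a sequence uses only the 20 one-letter codes ESMFold accepts):

    acceptableSequenceQ[seq_String] :=
      StringMatchQ[seq, RegularExpression["[ACDEFGHIKLMNPQRSTVWY]+"]]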
Let’s test it with APP that we retrieved from the PDB above:
So far, so good. The synthetic peptide 5V63 was made to study the oligomerization of the β-amyloid protein and contains ornithine, sarcosine and iodophenylalanine. It should fail the test:
Great! Now to filter the hits:
Third, x-ray crystallography is not perfect. Many crystals are not ideal and contain defects. One common defect in crystals of proteins is disorder, where a portion of the protein does not crystallize the same way in every unit cell; this phenomenon effectively blurs the electron density (it’s the electrons that scatter the x-rays) and the atoms cannot be located. The disorder often happens at the ends of the protein chain, but it can also happen where there are loops between α-helices and β-strands.
To make the comparison of the protein folding results most informative, we should remove those hits that have fewer observed residues than the full sequence. The first hit, 1A68, has unobserved residues, as indicated by the smaller modeled monomer count:
Processing the whole list and selecting those entries with equal counts gives us the final list of hits:
And, finally, we can do the analysis, that is, folding the sequence and comparing it to the experimental geometry:
Most of the folded structures agree with the experimental structure quite well with an RMSD of 4 Å or less:
Overall, the results look quite good. A large RMSD is to be expected when the fold confidence is less than 0.75, so the unexpected outliers have confidence greater than 0.75 and an RMSD greater than about 5 Å. What are they?
The structure 4J4C is the G51P mutation (proline replacing glycine at residue position 51) of 3EZM. Both are head-to-tail dimers that are intertwined. We can use the “assembly” information of the biomolecules to view the dimers (one half of each dimer is shown in blue and the other half in yellow):
The ESMFold model assumes the input sequence is for a monomeric structure, so it’s not surprising that it fails for these intertwined dimers.
Optimizing a Random Sequence
Baker’s group carried out the de novo design by first constructing a random sequence and then iteratively mutating one residue at a time. The position of the mutation was randomly selected from a uniform distribution, as was the new amino acid. The sequence was folded at each iteration and the change was accepted if the fitness of the predicted structure of the mutated sequence, F_i, increased. For a decrease of the fitness, the change was accepted based on the Metropolis criterion, i.e. with probability exp((F_i - F_(i-1))/t),
where t is the temperature, which was decreased in steps over the course of the iteration, effectively giving a simulated annealing algorithm. Strictly speaking, simulated annealing uses energy instead of fitness, and the temperature then has a physical meaning. They used the contrast between the inter-residue distance distributions predicted by the trRosetta network and background distributions averaged over all proteins as the fitness and an initial temperature scaled appropriately. They used initial sequences of 100 residues and an arbitrarily large number, 40,000, of iterations. We’ll follow this basic outline and adapt as needed for Wolfram Language.
Initial Random Sequence
We’ve already talked a little bit about amino acids and protein sequences. The BioSequence returned by the "BioSequences" property of a BioMolecule can return either the three-letter code or the one-letter code sequences as a string. For the amyloid precursor protein, we have:
Using the one-letter codes will be convenient for constructing random sequences and manipulating them:
Here is a random sequence of 100 residues:
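A sketch of how it can be generated (randomSequence is one of the one-liners listed in the Summary; it draws uniformly from the 20 standard one-letter codes):

    randomSequence[n_Integer] :=
      StringJoin[RandomChoice[Characters["ACDEFGHIKLMNPQRSTVWY"], n]]
    seq = randomSequence[100]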
Folding the sequence gives a BioMolecule, as we have seen before:
We can see that it doesn’t have any secondary structure elements and doesn’t look very much like a naturally occurring protein:
The residues have been colored from blue at the N-terminal end, through green, yellow and orange, to red at the C-terminal end.
Fitness
While we can compute the inter-residue distance distributions for the predicted fold, we don’t have the background distributions averaged over all proteins (e.g. from the PDB) used by Baker’s team, and therefore we cannot compute the divergence to use as the fitness.
However, all is not lost because as we have seen above, we can compute an overall confidence of a fold, which should be suitable as fitness. Not surprisingly, the fitness of the predicted fold for this random sequence is not very high:
Residue Mutation
The next thing we need to be able to do is mutate the sequence. First, a position in the sequence is randomly chosen, and then the amino acid at that position is replaced by a different amino acid:
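A sketch of that mutation step (mutate is one of the one-liners listed in the Summary; it picks a uniformly random position and substitutes a different amino acid there):

    mutate[seq_String] := Module[{pos, old, new},
      pos = RandomInteger[{1, StringLength[seq]}];
      old = StringTake[seq, {pos}];
      new = RandomChoice[DeleteCases[Characters["ACDEFGHIKLMNPQRSTVWY"], old]];
      StringReplacePart[seq, new, {pos, pos}]]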
We can use Diff to see where the mutation took place:
The valine at position 96 was replaced by alanine. That is, we have made the V96A mutant. What is the effect on the structure?
Interestingly, the effect is not entirely local: three α-helices have emerged well separated from the location of the mutation (the short red helix). Does that lead to an increase or decrease in the overall fitness?
Simulated Annealing Optimization
The fitness, i.e. confidence of the prediction, has decreased slightly. The Metropolis criterion for accepting the mutation is computed as:
The test for acceptance is:
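A sketch of that test, assuming oldFitness, newFitness and the current temperature t are already defined:

    accept = newFitness >= oldFitness || RandomReal[] < Exp[(newFitness - oldFitness)/t]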
So this change with its slight decrease in fitness would be accepted.
We can roll up the preceding code into a function for sequence optimization. The ESMAtlas service limits the number of API calls a user can make in a given period of time, but the details are not disclosed. A pausing mechanism has been built into the code to accommodate the throttle imposed by the service. We’ve also included a progress monitor because making API calls can be slow depending on how many other users are calling the service:
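In outline, the core annealing loop looks something like this condensed sketch (the real sequenceSimulatedAnnealing also handles the pausing and progress monitoring described above; foldSequence stands in for the ESMAtlas folding call and is a placeholder name):

    sequenceSimulatedAnnealing[initialSeq_String, iterations_Integer, t0_] :=
      Module[{seq = initialSeq, fit, newSeq, newFit, t, results},
        fit = confidence[foldSequence[seq]];
        results = {{seq, fit}};
        Do[
          t = t0 2^(-8 i/iterations);   (* temperature with a half-life of iterations/8 *)
          newSeq = mutate[seq];
          newFit = confidence[foldSequence[newSeq]];   (* foldSequence: placeholder for the folding service call *)
          If[newFit >= fit || RandomReal[] < Exp[(newFit - fit)/t],
            seq = newSeq; fit = newFit];
          AppendTo[results, {seq, fit}],
          {i, iterations}];
        results]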
This is an example of the progress monitor display:
To keep this proof-of-concept exercise tractable, let’s use only 1,000 iterations:
Optimization Results
The returned result is a list of pairs of the form {sequence_i, fitness_i}, where the fitness is the overall confidence of the fold. Let’s take a look at the final sequence to see how we did:
That’s really quite nice! Are there any structures in the PDB that have a similar sequence?
None. How about the UniProt database? This search was done manually, and it found one hit. Here is a snippet of the raw BLAST output:
Less than half of our sequence (residues 2–46) had some similarity to residues 242–286 of the 467-residue protein 3-isopropylmalate dehydratase large subunit. Statistically, the hit is not very good. The E-score is 9.7 (the “Expect” value in the output), and good homology matches have an E-score of 10^-5 or less. It’s safe to say that our de novo–designed protein has not been seen before.
What else can we learn from the optimization? Here is how the fitness improved over the optimization:
A large fraction of the iterations did not change the sequence:
And here is how the change in fitness evolved (the zeros have been elided):
Here is where the mutations took place over the course of the optimization:
And here is the residue position frequency distribution:
Twelve residue positions (6, 8, 46, 53, 62, 64, 69, 71, 82, 83, 96, 98) were not modified. Either they were not selected or the changes happened to be deleterious and failed to pass the Metropolis criterion. The best remedy would be to use more iterations (Baker used 40,000).
How did the amino acid content evolve, and what is the final distribution?
Isoleucine (I), arginine (R) and threonine (T) are the most frequent amino acids in the last sequence, and methionine (M) was lost altogether.
How did the geometry of the fold evolve? Let’s take 10 examples from a geometric progression through the iterations and fold them:
Now, starting with the last biomolecule, align the preceding biomolecule to it and repeat the process sequentially back to the first biomolecule of the sample. Computationally we do this with the function Fold, consuming elements of the reversed list of structures and appending each aligned structure to the growing result list:
Plotting the alignment RMSD will give a rough idea of the pairwise structural similarity of the sample:
Now, let’s take a look at the structures. In the plots below, the residues have been colored by the confidence of the prediction. The first number in each panel is the overall confidence of the fold, and the second number is the step in the iteration:
By the time the overall confidence reaches about 0.7, the fold has settled down.
Another way to assess the evolution of the fold makes use of inter-residue contact maps. As observed by Baker’s team, the maps are initially diffuse and become sharper over the course of the optimization:
Out of curiosity, we manually submitted the optimized sequence to the trRosetta server. Here are the folds that were predicted with the use of templates, along with the overall confidence:
The trRosetta model gives an atomic position confidence on a scale of 0 to 100, and the overall confidence for each fold is not very high.
The folding report included these remarks:
The confidence of the model is very low. It was built based on de novo folding, guided by deep learning restraints.
This is a single-sequence folding with trRosettaX, as no significant sequence homologs was detected (in uniclust30_2018_08).
How do they compare to our optimized structure predicted by ESMAtlas? Not very closely based on the alignment RMSD:
How Much Is Enough?
So, how many iterations is enough to get useful results? Even with today’s fast personal computers and fast internet speeds, it can take hours to evolve a sequence using the API service. As we saw above, 1,000 iterations is not enough to sample all 100 residue positions even once, much less repeatedly, to find an optimal amino acid for each.
5,000 Iterations
This iteration is not merely an extension of the previous one, even though they both began with the same initial random sequence and the same random state. This is because the half-life of the temperature decay is longer, allowing this iteration to diverge as soon as the Metropolis criterion gives a different outcome. This fact becomes obvious once the fitness of the two iterations is compared:
The final optimized sequence bears no resemblance to the one for the shorter iteration:
The amino acid distribution is quite different, also. Most notably, several amino acids are no longer present: phenylalanine (F), histidine (H), proline (P) and tryptophan (W):
So, what is the fold for this sequence?
It’s all one long α-helix! Considering that there are no proline residues, this result is not surprising. Proline contains a ring that restricts the rotation about the backbone φ angle, usually resulting in a kink in the backbone at that position. So, no proline residues means no turns:
The plots below show that the general topology of the final fold is reached fairly early, at about 40% of the way through the iteration when the confidence is around 0.75, as we saw in the shorter iteration:
10,000 Iterations
The evolution of the fitness for all three iterations is shown below. The 5,000- and 10,000-step iterations have produced sequences with very high confidence, but they have by no means converged to a maximal confidence, as shown in the inset:
While the sampling of the residue positions is about twice as high as the shorter 5,000-step iteration, it is more diffuse:
Moreover, only 15% of the residue positions experienced every amino acid at least once (all-green columns in the plot below), and none of the amino acids resided at least once at each of the residue positions (all-green rows; only alanine came close). So, going to 20,000 or even 40,000 steps (as did Baker) may be necessary for adequate annealing:
We again see the loss of several amino acids, and notably proline is among the absent:
The final sequence is:
Another all-α-helix fold, as foretold by the complete absence of proline. How similar are the two sequences?
There is some similarity between the two sequences, but it’s not very high. Can we discover why a proline-free sequence is the result of this optimization?
The Curious Case of Proline
One hypothesis might be that once proline is absent, it’s hard to restore it. The first proline-free sequence is encountered at step 2602:
What is the probability of a successful replacement by proline at each residue position? To compute those probabilities, we need the fitness (overall confidence of the fold) of the proline-free sequence and each of the proline-substituted sequences:
We also need the temperature at that step:
And finally the probabilities:
We see that a large fraction of the residue positions have a very low probability, but there is still a sizable number with a probability of 1, and, in fact, proline does reappear at a later step only to be ultimately lost. The last step to lose a proline is 5885:
Computing the probabilities to restore a proline at this point in the iterations follows:
So it has become much harder to restore proline to the sequence, and one ends up with a sequence that folds into an all-α-helical form.
A Weighty Improvement
Not all residue positions are created equal, and when it comes to structure prediction, some are better characterized than others. Since we’re doing design by optimization, could we improve the process by preferentially mutating the residue positions with lower prediction confidence? That is, give more attention to the less-well-defined regions.
We used residue-wise confidence to color code the folded sequences, and we can use the same residue-wise confidence to weight the residue selection in mutation.
Here is a refactored mutation function that can take an optional set of residue position weights:
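A sketch of the refactoring (handling the weights via RandomChoice is an implementation choice; with no weights it falls back to the uniform behavior used earlier):

    ClearAll[mutate];
    mutate[seq_String, weights_: Automatic] := Module[{pos, old, new},
      pos = If[weights === Automatic,
        RandomInteger[{1, StringLength[seq]}],
        RandomChoice[weights -> Range[StringLength[seq]]]];
      old = StringTake[seq, {pos}];
      new = RandomChoice[DeleteCases[Characters["ACDEFGHIKLMNPQRSTVWY"], old]];
      StringReplacePart[seq, new, {pos, pos}]]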
Taking the optimized sequence from the 1,000-iteration optimization as the input, here is a mutation without weights:
The confidence-based weights are calculated with:
Here is the mutation with those weights, starting, of course, from the same random state:
We can add an option to the sequenceSimulatedAnnealing function to use confidence-based weights (code changes highlighted in blue):
The time course of the fitness for the weighted optimization is roughly the same as for the unweighted. It’s not immediately obvious if the difference in fitness at the end is significant:
Surprisingly, substantially fewer mutations met the Metropolis criterion:
And that most likely explains the lower fitness at the end. There is a hint of more frequent mutation at the C-terminal end of the sequence (residue 100) in the plot on the right. This is due to the characteristically lower confidence in the residue geometries at the ends of the sequence. In a different 10,000-step optimization, it was much more obvious:
There is almost no sequence similarity between the two final sequences:
And, happily, the folded sequence has a different topology:
Again, the public protein databases are devoid of similar sequences:
Four hits were found in the UniProt database, with the best (although not very good, with an E-score of 3.4) being:
Summary
Well, what have we learned on our computational expedition? I think foremost is that surprisingly little coding was necessary. We created a few “one-liners” (confidence, acceptableSequenceQ, randomSequence, residueConfidence, mutate) and only one big function (sequenceSimulatedAnnealing). Everything else that we needed was built into Wolfram Language and just worked.
The ability to start with just a sequence of amino acid codes (1° structure) and in one step obtain a realistic three-dimensional protein structure (3° structure) is utterly amazing and deeply satisfying. The advent of LLMs is truly worthy of a Nobel Prize, and that we can easily climb onto the shoulders of those giants is breathtaking.
We also learned that experimental data can be challenging to use. Doing good science requires attention to detail and frequently asking why a particular result was obtained. As we went along, we postulated hypotheses and tested them.
We’ve only just scratched the surface of computational biology, and Wolfram Language will allow us to go much further.
Ideas for Further Exploration
One area for fruitful exploration is the correlation of amino acid properties with the optimized folds. Where are the polar and nonpolar residues located three-dimensionally? What about charged residues, such as arginine, histidine, aspartate and glutamate?
What is the effect of increasing or decreasing the rate of cooling? We set the temperature half-life to be 1/8 of the number of iterations, as was used by Baker’s group. However, we used a continuous protocol while they used a stepwise protocol.
Are there other optimization strategies that might be more efficient? We’ve already seen that weighting the residue position selection by 1 - residueConfidence increases the sampling of the less-well-defined regions of the chain. Is there a weighting by amino acid that could be exploited? What would be the effect of giving a preference to some amino acids over others? For example, there is a class of proteins known as glycine-rich proteins that contains more than 60% glycine residues and are found in tissues of many eukaryotic organisms.
Many proteins contain disulfide bridges between cysteine residues. How could this feature be incorporated into the random sequence generation and subsequent mutation? Can ESMAtlas fold sequences with this topological constraint?
What other optimization goals could one use? We optimized the confidence of the fold. How could you optimize for a particular shape or combination of helices and sheets? How could you optimize an enzyme active site or a receptor binding pocket?
Acknowledgments
Special thanks are due to Jason Biggs of the Wolfram Chemistry Team for useful discussions, quick bug fixes and solid code design and development for the new BioMolecule framework. Soutick Saha of the Chemistry Team has also been helpful in guiding my sometimes wayward steps through the plethora of online protein and bioinformatics resources, and he made several suggestions to improve this post. Jon McLoone made some improvements to the MarginalPlot resource function that gave me better control over the histograms.
4. Master the Basics of Laplace Transforms in Just 15 Lessons with Wolfram Language (Wed, Feb 5)
The Laplace transform provides effective and easy means for solving many problems that arise in the fields of science and engineering. It is one of the main tools available for solving differential equations. For most of us, the first time we see it is in an introductory differential equations course.
Wolfram Language provides an ideal environment for studying this subject, thanks to its built-in computational capabilities, both symbolic and numerical, as well as its powerful visualization tools.
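For instance, both the transform and its inverse are one-liners:

    LaplaceTransform[t^2 Exp[3 t], t, s]         (* 2/(s - 3)^3 *)
    InverseLaplaceTransform[1/(s^2 + 1), s, t]   (* Sin[t] *)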
Today, I am excited to announce a free interactive course, Introduction to Laplace Transforms, that will help students all over the world to master this subject.
The course is a basic introduction to the subject and it has three parts: Laplace transforms, inverse Laplace transforms and applications. It is intended for science, technology, engineering and math majors; teachers and professors looking for different ways of presenting Laplace transforms to their students; and anyone who wants to learn about Laplace transforms using Wolfram Language.
Clicking on the image below, which links to the course, lets you explore its content.
Historical Background
The Laplace transform is named after French scholar Pierre-Simon Laplace, who employed a similar transform in his investigations of probability theory (1814). Many other mathematicians made significant contributions to the theory of Laplace transforms, including Leonhard Euler, Joseph-Louis Lagrange, Salvatore Pincherle, Henri Poincaré, Oliver Heaviside, Thomas Bromwich and Gustav Doetsch. They conducted research on, developed and extended the Laplace transform.
Overview
Students taking the course will receive an introduction to the Laplace transform starting from the basic definition as an integral and its elementary properties, Laplace transforms of different types of functions, numerical approximations, inverse Laplace transforms and their properties, inverse Laplace transforms of different types of functions and complex inversion and numerical approximations for the inverse transform.
A final section consisting of applications to ODEs, PDEs, fractional calculus, sums and integrals hopes to inspire students to further explore the subject.
The course framework can be viewed in the following image.
The course consists of lessons, exercises and quizzes designed to help you master all the fundamentals of this subject.
Necessary mathematical prerequisites for the course are exposure to single-variable calculus and differential equations.
Let’s see in more detail what the course looks like.
Lessons
The course is organized into 15 lessons. Each lesson consists of a video and its written transcript.
The first lesson, “What Is a Laplace Transform?”, is a historical introduction to the topic and shows the first simple calculations of Laplace transforms.
All lessons contain numerous solved examples, often illustrating the use of Wolfram Language code and functionality.
Lesson videos range from 6 to 20 minutes in length and are accompanied by a transcript notebook. These notebooks can either be downloaded or viewed in the browser. Students can experiment with them and try the examples in a scratch notebook directly in the browser on the same webpage as the video.
Exercises
Each lesson contains five or more exercises that review the material covered in the lesson. The solutions are provided, most of the time in the form of Wolfram Language code. Exercises are a key component of the learning experience as they enhance the material covered in each lesson.
For example, below is an exercise from Lesson 10.
Students can experiment with Wolfram Language notebooks and try variations of the exercises or adapt the code to their own explorations.
Quizzes
The 15 lessons of the course are grouped into three sections. Each section ends with a quiz of 10 multiple-choice problems reviewing the material contained in the section. The quizzes pose questions that are similar to the exercises and provide feedback on answers.
Students are encouraged to use any method to solve the quiz problems, whether by hand or using Wolfram Language. A scratch notebook is provided for that purpose on the right-hand side of the quiz webpages.
Course Review
One of the difficulties with studying a subject like Laplace transforms is that a vast number of new concepts and theorems needs to be mastered in a short period of time. To help you in reviewing the course material, the course concludes with a final review lesson titled “Laplace Transforms in a Nutshell.”
Course Certificate
Students who finish the course and pass all the quizzes can get a certificate of completion.
A final exam is also available at the end of the course. Passing it entitles the student to a Level 1 certification for proficiency in Laplace transforms. It’s easy to track which videos you’ve completed and the status of your quizzes and exam by using the “Track My Progress” section of the course. Your shareable certificates are automatically generated and immediately available to you upon completing the requirements.
A Building Block for STEM Success
A thorough understanding of Laplace transforms is highly desirable for students not only in mathematics but also in physics and engineering. This course aims to help students master the basics of Laplace transforms and to provide a solid foundation for their further studies.
Acknowledgements
I would like to thank Hrachya Khachatryan, Devendra Kapadia, Anisha Basil, Joyce Tracewell, Cassidy Hinkle, Adam Bramowicz, Bob Owens, Jay Warendorff, Tim Shedelbower, Naoko Glowicki, Jamie Peterson, Lori Goodman, Laura Millar and Mariel Laugesen for their work on various aspects of the course.
5. Launching Version 14.2 of Wolfram Language & Mathematica: Big Data Meets Computation & AI (Thu, Jan 23)
The Drumbeat of Releases Continues…
Just under six months ago (176 days ago, to be precise) we released Version 14.1. Today I’m pleased to announce that we’re releasing Version 14.2, delivering the latest from our R&D pipeline.
This is an exciting time for our technology, both in terms of what we’re now able to implement, and in terms of how our technology is now being used in the world at large. A notable feature of these times is the increasing use of Wolfram Language not only by humans, but also by AIs. And it’s very nice to see that all the effort we’ve put into consistent language design, implementation and documentation over the years is now paying dividends in making Wolfram Language uniquely valuable as a tool for AIs—complementing their own intrinsic capabilities.
6. Using AI for Thematic Analysis: Analyzing Coroner Reports with LLMs (Tue, Jan 21)
In the United Kingdom, Prevention of Future Deaths forms (PFDs) play a crucial role in ensuring public safety. This is a special type of coroner report that documents more than just the circumstances of an individual’s death. PFDs are issued when a coroner investigates a death and rules that a specific risk or systemic failure—deemed preventable—played a significant role in said death.
While these forms do have a structure, in that each has sections that must be completed, coroners fill out those sections in natural language. Until now, that has made analysis of these forms very time consuming, with each report having to be read by a human.
Wolfram Language’s extensive list of built-in functions allows calls to various large language models (LLMs) to be made from inside the Wolfram kernel. Calling LLMs from inside Wolfram Language means that extracting unstructured data, such as the contents of a coroner report, takes a fraction of the time. We can then use Wolfram’s data analysis tools to process what we’ve gathered.
Collecting the Data
The UK Courts and Tribunals Judiciary posts a sample of these PFDs on their website. Unfortunately, they don’t have a public API for accessing these files, meaning the only way to view the files is by visiting the page and finding each file. This would take a very long time to do by hand, so we’ll need to make a web scraper to go through and automatically download the PFDs:
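A minimal sketch of such a scraper (the listing-page URL pattern and the link filter below are assumptions about the site layout):

    pfdLinks[page_Integer] := Select[
      Import["https://www.judiciary.uk/prevention-of-future-death-reports/page/" <>
        ToString[page] <> "/", "Hyperlinks"],
      StringContainsQ[#, "prevention-of-future-death-reports/"] &]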
Note: all data is taken from the Courts and Tribunals Judiciary Prevention of Future Death Reports under Open Government Licence v3.0.
Let’s test that this code works by getting the first two pages of links:
Brilliant! Now let’s use it to pull from more pages:
Let’s now import all of them to get the text of the document:
Data Extraction
With the data now collected, an interesting application is to review the length of these investigations plotted over time. The traditional way to do this would be to have someone read all of these reports and manually input the start and end dates of an investigation into a spreadsheet. This sounds very time consuming (and boring). LLMs can be extremely helpful here, having enough knowledge to be able to read the report and extract just the two dates, while taking nowhere near as long as a human would.
One drawback of using LLMs is that a lot of prompting often has to go into them to constrain their behavior. With imprecise or vaguely worded prompting, the LLM often ends up being very unhelpful and produces unexpected results. Thankfully, Wolfram has a good way of combatting this drawback. LLMExampleFunction not only takes standard prompting as an argument, but also allows you to pass in a list of examples for the LLM to follow:
Some examples in this post rely on a large language model (LLM) and require an API key.
This piece of code uses LLMExampleFunction to create a function that will take imported PDFs as input and will give a list containing the start and end dates of the investigations:
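A sketch of how such a function might be set up (the example pair and wording below are invented for illustration; the post’s actual prompt and examples may differ):

    dateExtractor = LLMExampleFunction[{
      "The investigation opened on 3 January 2021 and the report was signed on 14 June 2021." ->
        "{3 January 2021, 14 June 2021}"}];
    dateExtractor[reportText]   (* reportText: the imported text of one PFD *)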
A timeline plot of a random sample of the results shows that it returned what was expected (each line on the plot represents an investigation):
Real-World Applications
Previous academic research from Alison Leary et al. has investigated the main areas of concern that coroners express in their reports. Here, we take the resulting categories from that research and apply them to our own data. With that, we are able to combine previous insights from academia, the computational power of Wolfram Language and the fluency of LLMs to gather insights on a much larger corpus of data:
Categorization Code
We then list the main concerns identified by Leary, pass those concerns to an LLMFunction and prompt the language model to apply the categories to each file in the dataset:
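A sketch of that step (the category list here is abbreviated and the prompt wording is an assumption):

    categories = {"Communication", "Medication", "Staffing", "Documentation"};   (* abbreviated list *)
    categorize = LLMFunction[
      "Assign the following coroner report to one of these categories: " <>
        StringRiffle[categories, ", "] <> ". Report: ``. Answer with the category name only."];
    categorize[reportText]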
Plotting
By plotting each category for each year that we have reports for in a bar chart, we can see the most common PFDs:
A stacked bar chart is an alternative way of visualizing the same data that allows us to focus on the proportion of each category within each year. While the proportions are roughly consistent across the years, we can spot some temporal trends, for example, the peak in communication issues in 2020. The bar for 2024 is much smaller since the data was collected in the summer of 2024, when most reports for that year hadn’t been submitted yet:
To visualize the current trends better, we can make the stacked bar chart proportional to 100% of the year’s concerns. In that, we see that communication issues are on track to have a higher proportion of the total share compared to previous years, potentially reaching the levels that they had in 2020:
Interestingly, these results mostly mirror the ones found in Leary’s work. This suggests that employing LLMs in the initial stages of tasks that aim to extract insights from natural language—such as thematic analyses—can be a valuable first step in getting meaning out of unstructured data. That is likely to be especially true in cases where broad categories have already been defined by previous works, and these definitions can be passed down as instructions to the LLMs.
Going Forward
Using Wolfram tech, we can quickly gather and prepare data to spend more time making analyses and finding solutions to improve practices in the future. For extra help learning to computationalize your workflow, be sure to check out the new Wolfram Notebook Assistant!
8. Thanksgiving Day the Wolfram Way (Mon, Nov 18, 2024)
The holiday season is almost here. It’s a good time to look at the fun and informative ways Wolfram Language can contribute to your holiday meal planning. We are focusing here on Thanksgiving dinner, but these are useful tools for any holiday or family event that involves food!
Planning Your Menu with Nutrition Analysis
New resource functions in the Wolfram Function Repository make it easy to view the nutritional values of a classic Thanksgiving dinner. These values can simplify the process of planning a nutritious meal that accounts for certain restrictions your guests may have.
Calories and Macronutrients
NutrientComparisonBarChart shows that turkey breast is leaner than turkey leg and thigh meat with its lower calories and fat content. Turkey breast also provides higher protein. Not surprisingly, stuffing is the top contributor to calories and carbohydrates per gram, but it is so worth it:
NutritionReport gives a breakdown of calories and macronutrients for each food and then the totals for the whole meal. If you’re counting calories or carbohydrates, this is a good way to plan ahead:
Vitamins and Minerals
NutritionLabelData provides the percent of recommended daily value of vitamins and minerals in a 4 oz serving of cooked turkey breast. Turkey is high in niacin and vitamin B6, which are important for energy metabolism and neurological health:
Amino Acids
AminoAcidsBarChart and EssentialAminoAcidsChart are easy ways to compare the “building blocks of protein” in turkey versus other poultry. It was surprising to see goose take the lead in many of the amino acids, including tryptophan, the sleep-inducing amino acid usually associated with turkey:
Essential amino acids are the nine amino acids that our bodies cannot synthesize and must be obtained from our diets. Turkey squeaked ahead in lysine and methionine, necessary for protein synthesis, calcium absorption, hormone production, tissue growth and detoxification:
Fatty Acids
Saturated fatty acids (SFAs) in turkey meat include palmitic acid, the most prevalent saturated fatty acid in most diets, as well as stearic acid. Unlike other saturated fats, stearic acid has a neutral effect on blood cholesterol levels because it does not raise low-density lipoprotein (LDL), often called “bad cholesterol,” in the bloodstream.
The primary monounsaturated fatty acid (MUFA) found in cooked turkey is oleic acid, which supports heart health. However, olive and avocado oils are richer sources of oleic acid.
Turkey meat provides some polyunsaturated fatty acids (PUFAs), including linoleic acid, which is important for heart and skin health, as well as regulating inflammation.
Turkey is not a significant source of omega-3 fatty acids like eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which are vital for heart and brain health and keeping inflammation in check. Recommended sources of omega-3 fatty acids are fatty fish, like salmon, mackerel and sardines, as well as walnuts and flaxseed:
Managing a Busy Kitchen during the Holidays
Preparing a multi-course meal for a large group can be overwhelming! Recipe graphs and timelines are great tools to help organize the meal preparation and cooking process during the holidays.
Recipe Graph
Some examples in this post rely on a large language model (LLM) and require an API key.
If you find flowcharts more intuitive than step-by-step instructions, use RecipeGraph to create a directed graph of your favorite Thanksgiving recipe. The LLM can even write a recipe for you:
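Something along these lines should produce such a graph. This is a sketch only: the plain-text recipe argument is an assumption about the published signature, and, as noted above, an LLM API key may be required.

(* ask the repository function for a directed graph of a recipe's steps *)
ResourceFunction["RecipeGraph"]["classic bread stuffing"]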
Timeline of Two Recipes
Preparing multiple recipes at once can be a challenge for even the most efficient cooks, especially during the holidays. Working together, an LLM and Wolfram Language can build a single timeline for the recipes. The timeline is especially useful for knowing when to begin preparing ingredients. In this example of potato soup and spinach dip, I remind the LLM that I am a home cook, not a professional chef. The timeline shows that I need to begin my prep work 65 minutes before serving:
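For readers who want to see the shape of the final output, here is a hand-built sketch of that kind of timeline using the built-in TimelinePlot. The steps and minute offsets are illustrative assumptions, not the LLM’s actual plan from the post.

(* serving time for the meal *)
serveTime = DateObject[{2024, 11, 28, 17, 0}];

(* illustrative prep steps as {start, end} offsets in minutes before serving *)
steps = <|
   "Peel and chop potatoes" -> {-65, -50},
   "Simmer potato soup" -> {-50, -15},
   "Mix spinach dip" -> {-40, -25},
   "Chill spinach dip" -> {-25, 0},
   "Blend and season soup" -> {-15, 0}|>;

(* convert the offsets to date intervals and plot them on one timeline *)
TimelinePlot[
 Map[Interval[{DatePlus[serveTime, {First[#], "Minute"}],
     DatePlus[serveTime, {Last[#], "Minute"}]}] &, steps]]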
Safety First
Nobody wants foodborne pathogens to be the uninvited guests at Thanksgiving dinner. Here are several ways to help prevent food safety hazards during the holidays.
Turkey Cooking Time
Calculate the cooking time based on your turkey’s weight using natural language in Wolfram|Alpha:
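The same kind of query can also be sent straight from a notebook with the built-in WolframAlpha function; the query wording here is just an example:

(* natural language query to Wolfram|Alpha from Wolfram Language *)
WolframAlpha["how long to cook a 15 pound unstuffed turkey"]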
After cooking, use a food thermometer to make sure the turkey has reached a minimum internal temperature of 165° F (74° C). To check, insert the food thermometer into the thickest part of the breast, thigh and wing. If you cooked stuffing inside the turkey, check the temperature at the center of the stuffing to ensure it also has reached 165° F. For more information, visit www.foodsafety.gov.
Minimum Internal Temperatures
With FoodSafetyExplorer, you can review the minimum internal temperatures for a range of cooked foods:
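A minimal sketch of calling it, assuming the explorer can be launched with no arguments (the published signature may differ):

(* open the food safety explorer from the Wolfram Function Repository *)
ResourceFunction["FoodSafetyExplorer"][]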
To Brine or Not to Brine
Brine is a solution of water and salt. The salt in brine dissolves some of the protein in the turkey’s muscle fibers, which can reduce moisture loss during cooking. If you plan to brine your turkey, Wolfram Language can calculate the mixture for you. A basic turkey brine recipe is four quarts of water to 240 grams of kosher salt. Let’s define a function to calculate how much salt is needed to create any volume of brine:
Test that it gives what we expect:
Then use it for any volume:
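A minimal sketch covering those three steps, using the ratio of 240 grams of kosher salt per four quarts of water given above; the function name saltForBrine is just for illustration:

(* salt needed for a given volume of brine, at 240 g kosher salt per 4 quarts of water *)
saltForBrine[volume_Quantity] :=
  UnitConvert[Quantity[240, "Grams"]*volume/Quantity[4, "Quarts"], "Grams"]

(* test against the base recipe: should give 240 g *)
saltForBrine[Quantity[4, "Quarts"]]

(* then use it for any volume, e.g. 6 quarts gives 360 g *)
saltForBrine[Quantity[6, "Quarts"]]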
Do not brine any longer than two days, and always keep the turkey and brine refrigerated at 40° F or lower. Discard the brine mixture afterward. Do not reuse it. Visit the USDA to learn more about how to brine a turkey.
Recipe Risk Analysis
With RecipeRiskAnalysis, the LLM can help identify critical points in a recipe that may introduce food safety hazards, such as the risks highlighted in this recipe for deviled eggs:
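A sketch of how a recipe might be passed in; the plain-text string argument is an assumption about the published signature, and an LLM API key may be required:

(* ask the LLM-backed resource function to flag food safety hazards in a recipe *)
ResourceFunction["RecipeRiskAnalysis"][
 "Deviled eggs: boil eggs, peel and halve them, mix the yolks with mayonnaise and mustard, refill the whites, chill and serve."]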
Safe Storage of Leftovers
Once you’re stuffed and ready to call it a day, it’s time to clean up! Using the maximum cold storage times in Wolfram Language, you can decide whether to refrigerate or freeze those Thanksgiving leftovers:
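As a rough illustration of the idea, here is a tiny hand-made lookup with general USDA-style refrigerator limits; the storageDays values and the storeLeftover helper are my own illustrative assumptions, not the built-in data the post refers to, so check FoodSafety.gov for authoritative times:

(* illustrative maximum refrigerator storage times, in days *)
storageDays = <|"cooked turkey" -> 4, "stuffing" -> 4, "pumpkin pie" -> 4|>;

(* suggest freezing anything that will not be eaten within its refrigerator window *)
storeLeftover[food_String, daysUntilEaten_Integer] :=
  If[daysUntilEaten <= storageDays[food], "Refrigerate", "Freeze"]

storeLeftover["cooked turkey", 6]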
Happy Thanksgiving!
To our Wolfram community, we are thankful year-round for your creativity and passion for innovation. We wish you a Thanksgiving season filled with joy, good food and great computations!
9. Announcing the Winners of the 2024 One-Liner Competition (Wed, 30 Oct 2024) The 2024 Wolfram Technology Conference has ended, and we sent it off with our annual One-Liner Competition! Each year, participants are challenged to show off their Wolfram Language skills in this contest of brevity and creativity by using only 140 or fewer characters to share the most incredible and original output without using 2D typesetting constructs or pulling in linked data.
Entries from conference participants were judged anonymously by Wolfram staff. Judging criteria included aesthetics, understanding of the output and original use of Wolfram Language. Please note that entrants may have written their one-liners in different versions of Wolfram Language. While our judges verified that each entry listed was fully functional, reproducing the inputs may generate errors depending on your version.
Curious Mentions
This year, judges were so surprised by two entries that they decided to add a “Curious Mentions” category for these amusing takes on the challenge.
James Wiles: Craft a 1st-Place One-Liner Competition Entry (140 characters)
James Wiles’s submission took a tongue-in-cheek approach by writing a one-liner (in exactly 140 characters) asking Wolfram’s LLMFunction to generate a “1st-place one-liner competition entry”:
Arnoud Buzing: Complicate the Code (130 characters)
Arnoud Buzing also decided to utilize Wolfram’s LLM functionality. Rather than asking for a short-and-sweet one-liner, Buzing opted to use LLMSynthesize to expand and over-complicate an initial piece of code (“42” in this example) in 130 characters, generating an output well over 140 characters. Judges found this inversion of the original challenge to be amusing and worthy of a curious mention:
Third Place
Nik Murzin: Face the Camera and Smile! (140 characters)
Nik Murzin’s interactive one-liner had judges striking poses in front of their web cameras! At exactly 140 characters, Face the Camera and Smile! uses two variants (text and image) of the CLIP feature extractor network to match your facial expression to the most similar emoji:
Second Place
Catalin Popescu: TWBI or Not TWBI (140 characters)
Catalin Popescu, who was the first-place winner of the 2023 One-Liner Competition, pulls in a documentation example animating a skull and combines it with synthesized speech for a clever short form of “to be or not to be” (twbi || ! twbi):
Judges were excited at the possibilities presented and took the opportunity to try out alternatives with “twbe” and “twdi”:
First Place
Michael Sollami: StoryBookVideo (140 characters), TextAdventure (140 characters)
Michael Sollami, who was also the first- and second-place winner of the 2021 One-Liner Competition, wowed the judges with two entries this year.
StoryBookVideo utilized LLM synthesis to generate an eight-line story for children with accompanying visuals and narration:
TextAdventure generates a “choose-your-own-adventure” game featuring a day in the life of a randomly generated species using Wolfram’s Chat Notebook functionality:
Bonus Mentions
Andreas Hafver: Prismatic Polygons (135 characters)
Andreas Hafver’s submission created a bright and beautiful kaleidoscopic effect made of triangles in just 135 characters:
Zsombor Meder: Piano with PeanoCurve (138 characters)
Zsombor Meder’s one-liner produces a Baroque-sounding piano piece with a twist: the piece is composed using PeanoCurve, resulting in an amusing turn of phrase with Meder’s Peano piano:
Alejandra Ortiz: Angelic Visualizations (139 characters)
Alejandra Ortiz submitted a function using 139 characters that produced a stunning visualization that reminded judges of a kind of celestial throne:
Tommy Peters: Continuous Line Art for 3D Printing (139 characters)
Tommy Peters’s submission presents a function using ImageSynthesize that converts an image to a continuous line icon for 3D printing. Peters shared a continuous mushroom design he produced while testing the function:
Daniel Carvalho: Visualizing Flight Data (138 characters)
Daniel Carvalho presented a handy tool for frequent flyers in just 138 characters. Carvalho’s submission visualized flight data for specific airports using the recently updated functionality that allows users to access built-in Entity objects by pressing Ctrl + = and typing the desired entity in the input box (= [CMI] here):
Congratulations to the winners of the 2024 One-Liner Competition! Have more one-liners? Be sure to share them, and any other stunning projects, on Wolfram Community!