[admin post] Admin Post: Community Check-In for January 2026

Jan. 31st, 2026 10:07 pm
goss: Rainbow - Pencils (Rainbow - Pencils)
[personal profile] goss posting in [community profile] drawesome

Drawesome Monthly Check-In Post

It's the last day of January, and we'd love to have you check in and chat with us. How have things been with you this past month?

Did you sign up for or take part in any fandom activities in January, or have you been working on any personal art projects? Are you currently trying to meet a deadline? Feel free to share upcoming art challenges that have got you excited, any frustrations you've been experiencing, possible goals for the next month, and so on.

Sunday Word: Demesne

Feb. 1st, 2026 12:12 pm
sallymn: (words 6)
[personal profile] sallymn posting in [community profile] 1word1day

demesne [dih-meyn, -meen]

noun:
1 possession of land as one's own
2 an estate or part of an estate occupied and controlled by, and worked for the exclusive use of, the owner
3 land belonging to and adjoining a manor house; estate
4 the dominion or territory of a sovereign or state; domain
5 a district; region

Examples:

A couple of centuries or so later, the peninsula became part of a Spanish land grant, and the demesne of Manuel Dominguez as his Rancho San Pedro. (Patt Morrison, Palos Verdes Peninsula landslides can tell us a lot about L.A. history, Los Angeles Times, May 2024)

In Loki, the titular character finds himself in the bizarre (almost Brazil style) demesne of the Time Keepers, an organization devoted to ensuring the sanctity of the timeline. (Erik Kain, Owen Wilson And Tom Hiddleston Light Up First 'Loki' Disney Plus Trailer, Forbes, April 2021)

The castle or manor-house of the baron or lord, into which the thegn’s hall had now developed, was the centre of rural life. Around it lay the home-farm, the lord’s demesne land, cultivated partly by free tenants, partly by the customary labour due from the villeins whose cottages clustered on its border, and whose holdings, with a tract of common pasture and common woodland, made up the remainder of the estate. (Kate Norgate, England Under the Angevin Kings)

However, as he pursued his wayfaring with the two Armenian Christians who formed his retinue, he began to hear from the inhabitants of that portion of Abchaz the rumor of an equally dread demesne, named Antchar, lying before him on the road to Georgia. (Clark Ashton Smith, 'The Kingdom of the Worm')

After winding along it for more than a mile, they reached their own house. A small green court was the whole of its demesne in front; and a neat wicket gate admitted them into it. (Jane Austen, Sense and Sensibility)



Origin:
c. 1300, demeine, demeyne (modern spelling by late 15c), 'power; dominion; control, possession,' senses now obsolete, from Anglo-French demesne, demeine, Old French demaine 'land held for a lord's own use,' from Latin dominicus 'belonging to a master,' from dominus 'lord, master,' from domus 'house' (from PIE root dem- 'house, household'). Re-spelled by Anglo-French legal scribes under influence of Old French mesnie 'household' (and the concept of a demesne as 'land attached to a mansion') and their fondness for inserting -s- before -n-. Meaning 'a manor house and near or adjacent land,' kept and occupied by the lord and his family, is from late 14c, hence 'any landed estate' (late 14c) (Online Etymology Dictionary)

Why isn't 'demesne' pronounced the way it's spelled? Our word actually began as demayn or demeyn in the 14th century, when it was borrowed from Anglo-French property law. At that time, the Anglo-French form was demeine. Later, the Anglo-French spelling changed to demesne, perhaps by association with another term from Anglo-French property law: mesne, meaning 'intermediate.' (Mesne has entered English as a legal term as well.) According to rules of French pronunciation, the 's' was silent and the vowel was long. English speakers eventually followed suit, adopting the 'demesne' spelling. Our word domain (which overlaps with the meaning of 'demesne' in some applications) also comes from Anglo-French demeine. (Merriam-Webster)

but_can_i_be_trusted: (My Boys)
[personal profile] but_can_i_be_trusted posting in [community profile] vocab_drabbles
Title: 'Panacea'
Fandom: Original Poetry
Author: [personal profile] but_can_i_be_trusted
Rating: G
Word Count: 75
Characters/Pairings: Original
Warnings: None
Notes: Using Challenge #175: Specious, Challenge #176: Verboten, and Challenge #180: Panacaea
Summary: When there seems nowhere to turn

Panacea )

Daily Check In.

Jan. 31st, 2026 05:44 pm
adafrog: (Default)
[personal profile] adafrog posting in [community profile] fandom_checkin
This is your check-in post for today. The poll will be open from midnight Universal or Zulu Time (8pm Eastern Time) on Saturday to midnight on Sunday (8pm Eastern Time).


Poll #34155 Daily poll
This poll is closed.
Open to: Access List, detailed results viewable to: Access List, participants: 29

How are you doing?

I am okay
16 (57.1%)

I am not okay, but don't need help right now
11 (39.3%)

I could use some help.
1 (3.6%)

How many other humans are you living with?

I am living single
10 (34.5%)

One other person
13 (44.8%)

More than one other person
6 (20.7%)




Please, talk about how things are going for you in the comments, ask for advice or help if you need it, or just discuss whatever you feel like.

Book completed

Jan. 31st, 2026 03:23 pm
eve_prime: (Default)
[personal profile] eve_prime
Contrarian, by L.E. Modesitt, Jr. Grand Illusion #3. These books are meant for readers who take pleasure in following the characters from day to day, as matters slowly develop – it’s a type of immersion based in realism, and I really like it, but it’s not a conventional writing style. (Do you want to know which lunch he chooses in the cafeteria each day? Will it be the three-cheese chicken, the onion soup, or something else?)

In this one, Dekkard gets the premier to let him hold a proper investigation of the major violent crimes that took place in the previous two books, which were all instances of corporate corruption. (The Commercer party had controlled the government for decades, but now it’s the Crafters, like Dekkard.) His diplomatic way of talking with the premier and with his political opponents is worth study. The “contrarian” angle, where he pretends to create a movement to rival the extremists, isn’t actually a big part of the story. When the book came out, we didn’t yet know that there would be a sequel, but now we do – I hope I can keep it all fresh in my memory until November!

How LLMs Keep on Getting Better

Jan. 31st, 2026 08:47 pm
[syndicated profile] probablydance_feed

Posted by Malte Skarupke

If you look at the source code of a modern open source LLM, it looks very similar to the transformer described in the “Attention is all you need” paper from 2017. It’s just a stack of exactly three components: attention blocks, matmuls, and norm layers. The big algorithmic changes, like Mamba 2 or linear attention variants, aren’t really used yet. But look closer and almost everything has changed in the details.

The story of how LLMs keep on getting better is one of pushing for big and little improvements in a hundred different directions. Turns out hill climbing can get you to a really good place if you just climb along enough dimensions. This makes it hard to notice changes as they're happening because they're so small, so let's look at the last two years and see how many small changes added up to the big improvements we saw.

Big Visible Changes

  • models now “think” before giving an answer
  • models use “tools” like web search or writing Python programs
  • models have much longer context windows
  • the scaffolding around models is better (e.g. Claude Code or “deep research”)
  • models understand images and generate them

Big Invisible Changes

  • Mixture of Experts – Run giant models but only use a fraction for each token
  • Better GPUs – More memory and faster, especially at lower precision
  • Better data – people curate their training data much more now

The main point of this blog post is that we got many, many small improvements, so it'll necessarily be long and shallow to go through them all:

Thinking Models

Models can now expend tokens to think out loud, which improves their answer in the end. This doesn't look that complicated when you use it, but it required adding a new training phase of "reinforcement learning", which feels a bit more like traditional AI than neural networks do. You no longer just propagate a loss to predict the next token; you have to come up with good problems that teach the network the behaviors you want. I know very little about it. I liked that LLMs were based on text. Fewer worries about them having wrong objectives and wiping out humanity when all they do is predict the next token. But this reinforcement learning sure makes them better, e.g. at coding.

RLHF was a precursor, then OpenAI had an existence proof in the form of o1, and then everyone else fast-followed because, it turns out, there were many ways of doing this. Deepseek R1 is the most famous one, and they did make a genuine algorithmic improvement in GRPO. But if you look at the size of the step improvement of GRPO over PPO (which came out in 2017), it really isn't a large change. That'll be a theme. A lot of this is down to finding good problems to train on, which we'll also see in the "better data" section below.
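
To make the shape of this concrete, here's a minimal sketch of the GRPO idea - group-normalized advantages plus a PPO-style clipped update. Variable names are illustrative, not any lab's actual training code:

```python
# Sketch: GRPO advantages (no value network) + PPO-style clipped loss.
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (group_size,) rewards for G sampled answers to one prompt.
    GRPO normalizes each reward against the group mean/std instead of
    training a separate value model the way PPO does."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def clipped_policy_loss(logprobs_new, logprobs_old, advantages, clip=0.2):
    """PPO-style clipped objective applied to the sampled responses."""
    ratio = torch.exp(logprobs_new - logprobs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip, 1 + clip) * advantages
    return -torch.min(unclipped, clipped).mean()
```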

Tool Use

Two years ago we were talking about emergent abilities as models scale up. Then we just started giving them more abilities directly. LLMs started using tools like "web search". And instead of trying to do math in token-space, they just write little Python programs and run them for you. These allow the LLMs to compensate for their weak spots. Instead of having to make up next tokens for answers they don't know, they can google it for you. And Python is just better at math than LLMs are, so they no longer make basic mistakes.
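
The mechanics of tool use are mostly scaffolding. Here's a rough sketch of the loop, with a hypothetical model.generate and a tools dict standing in for whatever API a real provider exposes - the details differ per vendor, the control flow doesn't:

```python
# Sketch: the harness runs the model until it stops asking for tools.
import json

def run_with_tools(model, tools, messages, max_steps=5):
    for _ in range(max_steps):
        reply = model.generate(messages)          # assumed: returns a dict
        if reply.get("tool_call") is None:
            return reply["text"]                  # final answer, no tool needed
        call = reply["tool_call"]
        result = tools[call["name"]](**call["arguments"])   # e.g. web_search(...)
        messages.append({"role": "tool",
                         "name": call["name"],
                         "content": json.dumps(result)})    # feed result back in
    return "stopped: too many tool calls"
```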

Longer Context Windows

So many changes led to this. Remember that Llama 3 had a context length of 8192 tokens. And then Llama 3.1 had a context length of 128k tokens. That particular one was mostly better understanding of how to scale up RoPE. But there were also new extensions like YaRN. And then newer models have even longer context lengths. For a while it seemed like all the big labs were releasing one paper after another on how to get a million token context window. You also get small differences like how Deepseek applies its position embedding to only part of the query and key vectors (and leaves the rest without position embedding) or how GPT-OSS alternates between layers with small sliding windows and layers with full attention. Just different people trying different things.
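
As a rough illustration of the RoPE-scaling family of tricks, here's a sketch with a single scale knob; Llama 3.1's scaling and YaRN both pick the per-frequency adjustment more carefully than this:

```python
# Sketch: rotary position embedding with simple position interpolation.
import torch

def rope_angles(positions: torch.Tensor, head_dim: int,
                base: float = 10000.0, scale: float = 1.0) -> torch.Tensor:
    """positions: (seq_len,) integer positions; returns (seq_len, head_dim // 2).
    scale > 1 squeezes positions so a longer sequence reuses the angle
    range the model already saw during pretraining."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return torch.outer(positions.float() / scale, inv_freq)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """x: (seq_len, head_dim). Rotates consecutive channel pairs by the angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```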

And when you do run out of the long context of these models, they can now compact it and you can keep going. Which in practice just means summarizing the important bits and discarding the details. Unfortunately not much has been published on the details.

Train Using More GPUs

One problem with the long context window is that during training you just can’t fit all the activations into GPU memory. So people got really into splitting the training across as many GPUs as possible. This isn’t new, but there were dozens of little and big inventions for this, like Ring Attention and fused matmul/networking kernels.

Google released the Jax Scaling book with lots of techniques, Huggingface did their own take on this with the Ultrascale Playbook. The latter says “Reading Time: 2-4 days” which is optimistic. And after reading that you will still only have a surface-level understanding of what it says. This stuff is really difficult and you’ll tank performance a few times by e.g. sharding FSDP across too many GPUs before getting it right.
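
For flavor, the simplest possible FSDP setup in PyTorch looks something like this; real training stacks layer tensor, pipeline, and context parallelism on top and tune the wrapping policy per layer, which is where the difficulty lives:

```python
# Sketch: shard a model's parameters across GPUs with FSDP.
# Assumes it's launched with torchrun, which sets the rank/world-size env vars.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def shard_model(model: torch.nn.Module) -> torch.nn.Module:
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    return FSDP(model.cuda())   # parameters get sharded across all ranks
```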

KV Cache Memory Improvements

The long context length is still a big memory problem so models found other ways to save memory. GQA is an easy way to decrease the KV-cache size. Deepseek went more aggressive with MLA. PagedAttention helps with inference. And of course people compressed their KV caches.
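
GQA in particular is simple enough to sketch: many query heads share a few KV heads, so the cache shrinks by the group factor. Shapes only - no mask, no RoPE, no batching:

```python
# Sketch: grouped-query attention with n_q_heads queries sharing n_kv_heads KV heads.
import torch

def gqa(q, k, v, n_q_heads: int, n_kv_heads: int):
    """q: (seq, n_q_heads, d); k, v: (seq, n_kv_heads, d)."""
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)   # expand KV heads to match Q heads
    v = v.repeat_interleave(group, dim=1)
    scores = torch.einsum("qhd,khd->hqk", q, k) / q.shape[-1] ** 0.5
    return torch.einsum("hqk,khd->qhd", scores.softmax(dim=-1), v)
```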

Smaller Data Types

Another way to save memory is to use smaller data types. Instead of float32 use bfloat16. Instead of bfloat16 use float8, or why not just use FP4? We got both good hardware support for smaller data types and also algorithmic improvements (still happening) to make models robust to the loss of precision. I mean FP4 is a crazy data type in that I can enumerate all the possible values: 0, 0.5, 1, 1.5, 2, 3, 4, 6 (plus the same numbers negative). It’s really a testament to how robust neural networks have gotten that this works at all. Ten years ago neural networks were unstable by default and you had to try many seeds to get anything working (remember that we didn’t even know how to properly initialize linear layers until 2015) and now they’re so robust that you can throw crazy low-precision data types at them and they still work. GPT-OSS uses FP4. Most of the stability improvements were not in the last two years, but the smaller data types were. You see considerations for which data type to use all over the big papers, e.g. Deepseek thought very carefully about this.
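
You can check that FP4 (E2M1) value list yourself by enumerating the bit patterns - one sign bit, two exponent bits, one mantissa bit:

```python
# Sketch: enumerate the positive values representable in FP4 E2M1.
def e2m1_values():
    vals = set()
    for exp in range(4):            # 2 exponent bits
        for mant in range(2):       # 1 mantissa bit
            if exp == 0:            # subnormals: 0 and 0.5
                vals.add(mant * 0.5)
            else:                   # normals: (1 + mant/2) * 2**(exp - 1)
                vals.add((1 + mant * 0.5) * 2 ** (exp - 1))
    return sorted(vals)

print(e2m1_values())   # [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```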

Better Hardware

We also got better hardware. B200s gave us very fast FP4 performance. But mostly we got more memory. The H100 had 80GB of memory, the H200 has 140GB, the B200 has 180GB and the B300 has 280GB. Look at my sections above for why people want this. (also as an aside, the PagedAttention paper I linked above talks about using an A100 with 40GB of memory. That seems so small now, just over two years later…)

And then everyone started using TPUs, hardware that was built specifically for neural networks. This is less of a big deal than you’d think because Nvidia GPUs are now also mostly neural network machines, but it did make things cheaper than if there had been no competition.

Also networking got faster. And Nvidia released the NVL72 which is 72 GPUs connected together with really fast networking, to make all these many-GPU training jobs run better. This again required lots of little improvements to take advantage of, and to run robustly.

More Efficient Algorithms

Flash Attention 3 came out and was better and more complicated. Everyone is anxiously waiting for the FA4 paper.

At the same time matrix multiplication became even more crazy. Since these GPUs are now mostly giant matmul machines, you’d think that it would be easy to make them do a matrix multiplication. But no, a fast matmul requires crazy code and it’s still improving all the time.

And then of course you have to fuse that with networking now so that while your matmul works on the next block, the same kernel can do networking with all the other GPUs in your cluster to combine the results of the previous block with results from a different GPU. Because it's not optimal to do a matmul and to then do networking, like we did two years ago. You want to do both at the same time.

Also megakernels are maybe a thing now? I haven’t seen them used in open-source models yet.

Luckily torch.compile also became good in the last two years. Often you can write reasonable code and the compiler will turn it into efficient code. Which at least makes it easier to try out the latest papers.
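
In its simplest form that's just one call; options like mode="max-autotune" exist, but even the default is often a decent win:

```python
# Sketch: compile an ordinary eager-mode module into fused kernels.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
)
compiled = torch.compile(model)
out = compiled(torch.randn(8, 1024))   # first call triggers compilation
```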

Mixture of Experts

Another thing you can do is just not run the whole model for every token. E.g. in GPT-OSS 120B you actually only have 5B active parameters for each token. The matmuls are split into “experts” and you only do a subset for each token, decided at runtime. This sounds easy but required algorithmic improvements to work at training time. Backpropagation alone won't do anymore; you need to encourage the model to use all the experts at training time. Also we saw lots of experimentation with hyperparameters, like how many experts, what fraction of experts is active (usual numbers range from 3% in Kimi K2 to 25% in Grok), whether there are shared experts and how many, how exactly the routing works… And obviously there had to be algorithmic improvements to make this efficient at runtime, which is still very much ongoing.
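
The inference-time core of the routing fits in a few lines - pick the top-k experts per token and mix their outputs. This sketch leaves out all the training-time machinery mentioned above (load-balancing losses, capacity limits, shared experts) and is written for clarity, not speed:

```python
# Sketch: top-k mixture-of-experts forward pass.
import torch

def moe_forward(x, router, experts, k: int = 2):
    """x: (tokens, d_model); router: nn.Linear(d_model, n_experts);
    experts: list of per-expert MLPs."""
    logits = router(x)                                      # (tokens, n_experts)
    weights, idx = torch.topk(logits.softmax(dim=-1), k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize over chosen experts
    out = torch.zeros_like(x)
    for slot in range(k):
        for e in range(len(experts)):
            mask = idx[:, slot] == e                        # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += weights[mask, slot, None] * experts[e](x[mask])
    return out
```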

Larger Tokenizers

The vocabulary size of these models keeps on going up. Apparently that makes them better somehow. Llama 2 had 32k tokens in its vocabulary, Llama 3 had 128k, GPT-OSS has 201k. This means the embedding layer and the un-embedding layer are a significant fraction of the active 5B params in that model. The hidden dimension of GPT-OSS is 2880, and 201k*2880 = 580m parameters each for the embedding and unembedding layers, for a combined total of 1.16B. Meaning more than 20% of the active params are just there to go from token indices to hidden dimension and back.
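
The same arithmetic, spelled out (numbers taken straight from the paragraph above):

```python
# Sketch: how much of GPT-OSS's active parameter budget the (un)embeddings take.
vocab, hidden, active = 201_000, 2880, 5_000_000_000
embed = vocab * hidden        # ~580M params for the embedding layer
total = 2 * embed             # embedding + unembedding ≈ 1.16B
print(total / active)         # ≈ 0.23, i.e. more than 20% of active params
```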

Slower Scaling

Models are not getting bigger as fast as they used to. Deepseek V3 came out a year ago with 671B total params, out of which 37B are active for each token, and Kimi K2.5 has 1T total params, out of which 32B are active for each token. Gone are the days when the number of params multiplied by 10. And even then, the big models are MoE now. I don't think anyone has gone bigger than Llama 3's 405B active params, and that came out 1.5 years ago.

Since we can train on very large numbers of GPUs now, each of which has enormous amounts of memory, I don’t think the limit here is ability any more. (like it would have been two years ago) Everyone can figure out how to train giant models now. I’d guess the limits are given by diminishing returns, and by high hardware prices.

Distilling Models

One way that models actually got smaller is through distillation. We saw this with Claude Opus and Sonnet. Anthropic trained a really big model, Opus, and then trained a smaller model, Sonnet, to imitate it. This makes the models cheaper and faster to run while only losing a little bit of quality.
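
The generic recipe is just "match the teacher's output distribution"; how Anthropic actually trained Sonnet isn't public, so take this as the textbook version rather than theirs:

```python
# Sketch: classic knowledge-distillation loss (Hinton-style temperature softening).
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 to keep gradient magnitudes comparable."""
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```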

Attention Sinks

Attention always had weird effects where the model seemed to pay a lot of attention to the first token in the sequence. Eventually the theory for this became that this happens when there are no important tokens, so the first token acts as a “sink” when nothing needs to be attended to. Recently people added explicit sinks to their attention layers (GPT-OSS) which act as a threshold for the softmax in attention. Meaning if nothing gets enough weight, the sink will zero out all the attention scores. And Qwen noticed that you can get the same benefits by putting one more gate after attention. Apparently this just makes the model straight-up better along all dimensions at the cost of minimal extra compute because the model has to compensate for less weirdness.
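
Roughly, and in the spirit of the GPT-OSS version rather than a faithful copy of it: a learned per-head logit joins the softmax, soaks up probability mass when no real token deserves attention, and is then thrown away:

```python
# Sketch: attention softmax with an explicit per-head sink logit.
import torch

def attention_with_sink(scores: torch.Tensor, sink_logit: torch.Tensor):
    """scores: (heads, q_len, k_len) pre-softmax attention logits;
    sink_logit: (heads,) learned parameter."""
    sink = sink_logit[:, None, None].expand(-1, scores.shape[1], 1)
    probs = torch.softmax(torch.cat([scores, sink], dim=-1), dim=-1)
    return probs[..., :-1]   # drop the sink column; rows may now sum to less than 1
```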

Better Data

The Olmo papers are always great, and you can see clearly how better data became a focus. OLMo 2 talked about various architectural decisions, algorithmic improvements, training stability, and yes, also data. But read OLMo 3 in comparison and it's all about training data. Once again dozens of improvements. Details about gathering, deduplicating, filtering, deciding the order… And then the whole thing again for reinforcement learning problems, plus iterating on what problems work… Reading all these many pages on data quality makes me think that this must account for a big part of the difference between other models, too. (Claude and Gemini come to mind)

Synthetic Data

Turns out you can use LLMs to generate training data for other LLMs. This is most obvious for reinforcement learning problems where you need to generate lots of problems. There were some early papers about how synthetic data is really bad, and then more work made it not so. The tl;dr version of it seems to be “keep on iterating on the synthetic data until it’s really good.”

Better Optimizers

When you train a model you have to use your loss-gradients to update the model somehow. This is the job of the “optimizer”. We got the first good optimizers ten years ago and they’re one of the big reasons why neural networks started getting good then. Right now we have a second phase of getting better optimizers. Apparently people are now speedrunning training of LLMs to a certain quality. What took 45 minutes two years ago now takes under 2 minutes. (half of this is due to better optimizers) If you can train a model to a good quality faster, it will end up at a better quality overall by the end of the training.

Learning Rate Schedules

This is a surprising point in that you’d have thought that we figured out what learning rates to use ten years ago. But almost every paper now talks about their learning rate schedules and they’re all a little different. These schedules are actually still pretty simple, so I wouldn’t be surprised if we see more improvements here. (this has to co-evolve with the optimizers and data that’s being used)
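
For reference, the boring baseline most papers start from is warmup followed by cosine decay; the differences are mostly in the warmup length, the floor, and the decay shape:

```python
# Sketch: linear warmup then cosine decay to a minimum learning rate.
import math

def lr_at(step, max_lr=3e-4, min_lr=3e-5, warmup=2000, total=100_000):
    if step < warmup:
        return max_lr * (step + 1) / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```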

Better Scaffolding

We got Deep Research and Claude Code. These were enabled by long context windows and tool use and by reinforcement learning, but they also just allow the models to do a better job than the old call and response. Now you can tell a model to do something and it just goes and does it. There was no place for models to do this two years ago.

Big Areas I Can’t Cover

When there are dozens of directions that models improve into, there are some big areas that I can’t cover because I know little about them and because they would be too big on their own:

Better Finetuning

I mentioned RLHF, but I don’t think that is even used any more. Llama uses DPO instead and there have been more papers since. As I mentioned with the “Better Data” point above, recent papers now spend a lot of time talking about how they finetuned the models after pretraining (a term which means “read lots of text and predict the next token in all of it”) is finished. It’s too much to cover.
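
Since DPO comes up: the whole loss is essentially one line, which is part of why people like it. Inputs here are summed log-probs of the chosen and rejected responses under the policy being trained and under a frozen reference model:

```python
# Sketch: Direct Preference Optimization loss.
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta: float = 0.1):
    """All inputs are summed log-probs of the chosen/rejected responses;
    beta controls how strongly the policy is pulled away from the reference."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()
```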

Multimodal Models

Models can now generate pictures and videos and sounds. I take so many pictures of things now and ask models about them. My impression is that writing about these areas would be twice as long as this whole blog post again. Luckily I know very little about all the improvements that led to that, so I won’t talk about them, but given the pace of improvements of e.g. image generation, it’s clear that they also went through dozens of improvements.

Inference Improvements

People started using speculative decoding, predicting multiple tokens at once (e.g. for the little google search AI snippets where cheap inference is important), and I’ve seen the headlines for various papers about how to better assign requests to hardware to get better batching and caching. I didn’t read any of them.
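
I'll at least sketch the accept/reject core of speculative decoding, since the idea is simple even if the engineering isn't: a small draft model proposes tokens, the big model scores them in one pass, and each drafted token is kept with probability min(1, p_target/p_draft). Real implementations also resample the first rejected position, which is omitted here:

```python
# Sketch: acceptance step of speculative decoding.
import torch

def accept_draft(draft_probs, target_probs, draft_tokens):
    """draft_probs, target_probs: (k, vocab) distributions at each drafted position;
    draft_tokens: (k,) tokens the draft model proposed.
    Returns how many of the k drafted tokens are accepted."""
    for i, tok in enumerate(draft_tokens):
        p_t = target_probs[i, tok]
        p_d = draft_probs[i, tok]
        if torch.rand(()) > torch.clamp(p_t / p_d, max=1.0):
            return i              # first rejection: keep tokens 0..i-1
    return len(draft_tokens)      # all accepted
```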

Summary and Outlook

AI is weird in that the chat interface looks very similar to two years ago, and if you look at a model's code it looks very similar to two years ago, but in the details everything has been hill climbing in many small improvements to make better models. Does any individual improvement make a big difference? Would models be much worse without e.g. explicit attention sinks? No, but it all adds up. And sometimes enough small improvements allow a step change in capabilities, like the longer context did.

More papers come out than anyone can possibly keep up with (even just reading the headlines or the abstracts), and I only looked at the ones that made it into released models and that I remembered. But other areas haven’t stood still, even if no big models use their improvements. State-space models and linear attention have also been hill-climbing. I would not be surprised if they’re better than transformers soon (it would be a classic example of the theory of a cheaper, worse thing disrupting a more expensive, better thing by slowly improving). Or maybe those mixture-of-depths or H-Net approaches get adopted. And for some reason papers keep on coming out about how much better RNNs are getting. There are so many different approaches that you don’t see in LLMs yet, but have a chance of being adopted. When the next big thing comes out, it’ll probably be years in the making.

And of course even within transformers there are dozens more directions to explore. Big ones that come to mind are multiple residual streams, generalized attention, even more aggressive compression to smaller data types, more complicated attention. This architecture is not done improving. Even if every single one of these is a small step, it’ll add up.

I used to think that we need some algorithmic breakthroughs to make LLMs really good and get over their weird flaws. (where they’re really good at many things and then make the stupidest mistakes at other times) Now I think we are at a good enough starting point where we can hill-climb our way out of this. I’d be surprised if we didn’t see some big steps in addition to the many small steps, but I no longer think it’s necessary. The overall pace of improvements has just been so good.

[syndicated profile] atlas_obscura_places_feed

The Bavarian maypole is an age-old tradition going back centuries. Originally erected as a symbol of all that grows and bears fruit, it is now a symbol of wealth and pride for the community that sets it up.

The tradition dates back to the 13th century, and ever since, each community has tried to outdo the others by erecting the tallest and straightest maypole. The associations that make them scout for the best trees weeks in advance, fell them, and hide them away. The pole is then decorated with Bavarian colors and little signs on each side that denote what the community is proud of. These can be monuments, certain shops, and even things like a recently opened metro station.

The poles are erected on the first of May during a large spring festival, where the pole is hoisted up by hand to proclaim the greatness of the location. However, neighboring communities often try to steal their rivals’ pole, which is not illegal but rather part of the tradition: if they manage it, they can ransom the pole for technically anything they want, usually large quantities of beer and food.

The most amazing maypole theft ever was in 2004 on the Zugspitze, where a daring Bavarian stole a 20m maypole with a helicopter, flying it to an Alpine hut where a ransom was set.

Isn't It Punny.....

Jan. 31st, 2026 02:53 pm
disneydream06: (Disney Funny)
[personal profile] disneydream06
Jan 31st/Feb 1st...


Someone Broke Into My

House Last Night And

Stole My Limbo Stick.



How Low Can You Get?

Recent Reading

Jan. 31st, 2026 11:48 am
sanguinity: woodcut by M.C. Escher, "Snakes" (Default)
[personal profile] sanguinity
David Macaulay, Ship (1993)

Lengthy (96 pages!) illustrated for-older-readers children's book detailing an underwater archaeology expedition to investigate the wreck of a fifteenth-century caravel, finishing with a builder's journal documenting the caravel's construction. Lots of information about archaeological planning, research, and methods, followed by a similarly detailed section on historic ship construction. The illustrations and diagrams are as information-rich as the text. (When reading this aloud to [personal profile] grrlpup, I often stopped to elaborate further on some detail in the drawings.) For a fully-illustrated picture book, the reading level is fairly advanced (verbose and with lots of specialized vocabulary), providing lots of opportunity for an older child to nerd out undisturbed. (An older child -- or me!)


Lois McMaster Bujold, Paladin of Souls (2003)

Immediate sequel to The Curse of Chalion, plus a few years. Our point-of-view character is someone who was mostly dismissed in the first novel for alleged madness -- and in fact, her early motivations are wholly about getting out from under the "protection" of people who think she's mad.

Of course, once she does get out, adventures start being had. And she's mad about it, because she wasn't planning on having adventures, she just wanted to have a nice life being left alone on her own terms. Alas.

Ripping yarn, I liveblogged most of it to [personal profile] phoenixfalls as I read it, things kept snowballing in that classically Bujold way, and much like in The Curse of Chalion we were a good ways into it before figuring out what the larger plot ultimately even was. There were a number of moments that made me laugh out loud. (When she experimentally kisses the literally too-handsome-for-his-own-good guy to see if it will break a spell, and he isn't fazed in the least, just kisses her back as if this happens every so often and he considers it "impolite to duck".) Ista reminds me more than a little bit of Cordelia, and I wouldn't call that a bad thing.


Charlotte McConaghy, Wild Dark Shore (2025) -- DNF

I don't usually post about my DNFs (Did Not Finish), because why bother, but I did read about half of this, and was hugely conflicted.

Did Not Finish )

Anyway, it's a month overdue and four hundred people are waiting for it at the library, and I keep thinking about other books on my tbr list that I want to read but I "have to" read this one first. Boo. I hate it when I can see the book I would have found compelling around the margins of the book the author actually chose to write.

Recipes and stuff

Jan. 31st, 2026 02:38 pm
flamingsword: Knitting needles and yarn (Crafting)
[personal profile] flamingsword
If you know anyone opening a restaurant or bulk ordering their food staples : https://www.webstaurantstore.com/

Recipe I made this afternoon, bc Mom wanted to make a sourdough starter last week so we did: https://www.pantrymama.com/best-banana-sourdough-muffins/ I guesstimated amounts bc we don’t have a kitchen scale, and they came out slightly sweet but Mom and Stepdad love them, so yay!

We’re currently making jambalaya out of leftovers and I just finished the swatch for making the Melt The ICE protest hat for charity. I’m not sure who it will get given to once it’s done, but I’m making one. https://www.ravelry.com/patterns/library/melt-the-ice-hat

January fog.

Jan. 31st, 2026 09:43 am
serafaery: (Default)
[personal profile] serafaery
Josh is coming home today, I am hoping to get the floors swept, mopped, and vacuumed before I have to leave to pick him up. We'll see how I do. I also would love to do PT, bake banana bread, bonus would be to vacuum the couch and brush the cat trees, and to leave early enough to leisurely shop at the coop on the way to the airport. Oh and I need to get gas. And I need to get dressed. Three hours until departure, hmmmmm. :)

It's fine, whatever happens, happens. I've been too stressed out to put a bunch of pressure on myself about this stuff.

There is the whole situation of our government, that's a given stressor.

Also there is my breast, which is still having weird twinges of pain and I'm starting to think that the cosmetic mistake is also physical, and I'm starting to get really angry. How much would I have to pay to fix it, can it even be fixed, how would I ever allow another doctor to ever touch me ever ever again.

I say that, but I also have a third customer now doing chemo for breast cancer, and just reconnected with one this week who was gone for two years having radiation and surgery for her breasts. Sigh. She LOVES her rebuild. Happy for her. :/

Also, Jackie and Shadow (eagles) lost their first clutch of eggs. Losing a clutch isn't unusual, but it's still sad, and this was really early in the process and very unusual - they have never just abandoned eggs that were less than a week old to let ravens eat them. The group is all in an uproar about it. I could tell immediately that something wasn't right with the first laying, Jackie was not intent on incubating, and later experts confirmed that one egg was cracked. Bald eagles are very susceptible to toxins and it weakens their eggshells (it's why they were threatened for some time, too much DDT in the water and hence in the fish they were eating), and Jackie and Shadow live in a highly populated area, so it's worrisome. But I also could use a break from nest watch, and maybe they could also use a break from being parents. Last year was really hard, and it was their first year raising two successful fledges. Most of the time the chicks don't make it, or at least most of them don't. I don't really want to go through that.

I think Jackie may suspect, like me, that we are going to have a very late and difficult winter. They can lay a second clutch if the first one fails, but I worry that would entail super harsh weather for super young chicks, this is how many of their chicks died in the past, just, exposure. :( We'll see what she decides to do. I really kinda hope there are no more eggs this year. I may not watch if there are. Eagles are neat but also kind of brutal. They eat *so* much fish. And waterfowl. And their babies perish. It's just a lot of the harshness of life right in your face.

Gotta take the good with the bad in this wild existence.

Will try to share some images and videos from the last week. I didn't do much other than work yesterday, was mostly recovering from a super fun, super long night at Shadowplay. Derek was on fire and it was a good time. His birthday bash is next week. Will try to think of something special for him.

Did a little crafty project this morning for Josh - he doesn't like cards and it's his birthday weekend and we're not at Summit Prairie like we're supposed to be so I want to do something special for him. So I made him a little garland for his bedroom door. His birthday falls right before Imbolc and the Chinese New Year, so there are some valentines vibes in the air, but we're still in the depths of winter.


you are loved.

Full moon tonight. It's so foggy still! I love it.

...photo sharing....


Neahkanie mt with Josh Sunday


Frosted trees and Loowit from Dog Mt summit trail


dramatic winter landscape in the gorge


happy place (Dog Mt Summit - I hope at least some of my ashes make it up here)


dirty mirror club selfie from Thursday

Dog on Tuesday (the tiniest little snowflakes fell)



Coffin Thursday, it was super busy but I slipped into the coffin room for a break and grabbed a lil snippet of what I generally do when I go there (minus the usual dramatic lights and a bunch of sexy people to flirt with). Charlie in particular looked soooo amazing Thursday, she came in a DRESS which is unheard of, she is a friend of Finley's and I adore the way she dances, she shreds. Manders also gave me lots of attention that night, as did Chanti and Mitch (he's a sweetie). Kiyoki looked amazing as always. Malkom and lots of other regulars were around for hugs and getting down. Lots of random cuties everywhere also. I hung out with Duncan for a bit, but I stayed long after he left. I was sooooo tired and also very happy.

petra: A woman grinning broadly (Shirley - Good day)
[personal profile] petra
From a Tumblr post by [tumblr.com profile] petewentzisblack1312, quoted in full for people who don't Tumbl:

heres my challenge to everyone for next month, for black history month. any time you want to draw inspiration from art, like poetry, music etc, pick a black artist. web weave with langston hughes and james baldwin and jamaica kinkaid and hanif abdurraqib and derek walcott and set your edits to meghan thee stallion and beyoncé and eartha kitt and coltrane and invoke basquiat in your art and it can be fanworks or original stuff and importantly, it doesnt have to be about race. obviously be cognizant of the context of the art youre using because a lot of the artists i mention specifically create art about racism but like. take your white doomed yaoi ship and make a webweave to poem by langston hughes. set an edit to body by meghan thee stallion. engage with black art in all contexts.

Check the post's tags out for suggestions of artists to explore!
[personal profile] cosmolinguist

Mostly Moira of course.

But I'm also missing my DVD boxset that included Waiting for Guffman and A Mighty Wind.

2026 Snowflake Wrap-up Post

Jan. 31st, 2026 12:41 pm
florianschild: The words Snowflake Challenge overtop of a snowy scene of shops lit up at night (A Night Snowflake Challenge Icon)
[personal profile] florianschild posting in [community profile] snowflake_challenge
We've reached the closing curtain of our beautiful Snowflake Challenge 2026. It's been a whirlwind month of fun, community, and lots of creativity! One of the best parts of this challenge is that it truly lives up to its name and its original inspiration: every single year that we come together to celebrate is a unique circumstance of participants, mods, prompts, graphics, challenges, and celebrations. Every year is a unique snowflake in and of itself, never again to be replicated in the exact same pattern. I hope everyone felt some enjoyment and appreciation during the past month, and of course please continue to post your responses and fills because there is no deadline to this challenge!

Thank you so much to all the participants. Thank you especially to those who took the time to interact with fellow participants and make the community feel so alive! And of course thank you to all the mods who went above and beyond and especially to [personal profile] tjs_whatnot, our co-admin who has worked really hard this month to keep everything running smoothly.

We do have a poll below to get your feedback on the challenge, if that's something you're interested in doing. We really appreciate it and we take all your responses into consideration when planning for next year.

Peace and happy late winter season to all!

Poll under the cut! )
steepholm: (Default)
[personal profile] steepholm
We haven't heard yet from George - who, being born in 1813, is the youngest of Weeden Butler's Cheyne Walk correspondents. His letters to his eldest brother tend to focus on the garden and on animals, whether considered as pets, livestock or food. This is typical, written when he was ten years old:

Chelsea, October 23rd

Dear Weeden

I do Saesar [sic] with John, Edward, and Henry Wylde; and we have done three pages in it, since I began. I have left off Corderious [sic] a long time. Would you be so kind as to lend me an Ovid? Charles Giberne killed two rabbits, one black and the other brown, and he had a great feast with Strachy [sic] and the two Hancocks, Papa has given me an Enfield’s Speaker with four pictures in it, two men came to ask Papa’s leave to build a house in Mr Depuis’ [sic] Garden, and Papa said that he had no Objection; but that they were not to make any windows to look in the playground: and they have begun to build it. The Hancocks are making an arbour in their garden, and have lengthened it down to Bowerbank’s garden. They have made a trench round the earth, as I have made mine. Bowerbank and I collected a great many bones, and I emtyed [sic] them out two days ago, and they were all over good fishing gentles. Miss Brunell [sic] came here and she says, that her Papa and brother are ill. I remain, your affectionate brother,

George Butler


In case you don't know (I had to look it up), fishing gentles are blowfly larvae, good for bait. As for the people mentioned: Strachey we've already met; Charles Giberne would go on to be the father of Agnes Giberne, a children's and popular science writer; while Bowerbank is almost certainly Louis Quier Bowerbank, who (as any fule know) did so much to reform mental healthcare in his birthplace of Jamaica.

It's nice when letters by different people refer to the same events, and we get a bit more detail on the projected new house in a letter from Fanny, written at the same time. Fanny, aged twelve, is clearly testing her powers of literary expression. She would go on to become the family poet, or what her nephew Gerard would describe acerbically as "a determined rhymer", but I quite like her turn of phrase in describing the playing style of the infant Isabella:

A gentleman of the name of King is building a house at the bottom of our playground, in Mr Dupuis’ garden. He is a paper stainer, & says “he is building it to dry his paper.” He came the other day to ask Papa’s leave, without which Papa says he could not have done it. The windows are not to face the playground. George was mightily pleased with your letter and got through all the prosy part very heroically without once giving it to Papa to read. The Hancocks have been making their garden much longer. Mine is getting on very well and my Myrtle is beginning to blossom very nicely. The box of playthings that you gave to Isabella has begun Alas! to feel the heavy hand of time. Legs and arms have been broken off without mercy. However, the stumps still remain and she seems as fond of them as ever.


A couple of months later, in the run up to Christmas, we find elder sister Anne (aged 15) party planning. Have things changed much in the last two centuries? But of course, since her mother's death the previous year she is now mistress of the house, and takes these things seriously:

I hope we shall be able to have a little dance these holidays. I have planned it all, and have made out a list of about 40 or 42 persons, whom I should like to come. When you are at home, we must think about it. I think we might have the dance in the School room, if there were many people coming, or in the dancing room if there not above 16 or 20, and then we might have the tea and supper, in the study as that is a ???er room than the parlour, and would be more handy, as it opens into the Schoolroom. The only objection I have to the Schoolroom is that it is so much disfigured by the boys. The walls are so covered with ink. We might have the green forms from the dancing room down, and it would be very easy to cover two more with green, and I daresay 4 would be enough, and they take up much less room than chairs. I think that we might cover the part over the fireplace with artificial flowers, as those were made at Mrs Christie’s and that is the most conspicuous part, and I think the worst in the room. Out of my list of 40, perhaps not above 25 would come, but it is always best to send out about 20 invitations first and then see how many of them will come, and then if more are wanted to send about 10 more, and so on. Will you have as many as you want. I will send you a list of those I thought of, perhaps you will think of some more to add to it. I daresay you will not know all the names, but some of them are great friends of Fanny’s school and some are my friends. It is a good plan to make out a large list and then we can ask first those we wish most to come and if they can not, we can make up the numbers we want by others. I believe the party at Mrs Christie’s will be about the 30th of the next month.


Let us end in July 1825, where we find Anne reporting on a couple of delightful outings in a much more rural London, complete with gypsies:

On Monday Miss Gardiner, Fanny & I went for a walk to Putney, and along the towing path about a mile or rather more, we set out directly after breakfast & took our provisions with us, & also books and work [i.e. needlework]. We spent a delightful day in the fields & came home to tea at 7. Yesterday we had Mr Johson’s cart and set off at half past 9 in the morning round by Vauxhall, Miss Eady’s, Lewisham, Sydenham & to Norwood where we dined & had tea & came home at 6 through Brixton, Clapham, Kennington & Battersea. At Norwood we were surrounded [by] gypsies. Mary had her fortune told. They wanted me badly to have mine told, one of them said I was born to riches, that I should have a handsome present soon & a lot of nonsense. Isabella Gardiner is to marry once more. (I suppose they thought she was a widow.) We had a beautiful ride, and when we liked we got out and walked. We took a great many things with us. Isabella was quite out of her mind with joy. I never heard her laugh so & say such drole [sic] things before. ... I shall send you a piece of cake which I hope you will like. I am sorry to say Cook did not bake it half enough.


What became of these children? They had very different fates. The shortest-lived was young George, who died aged just 16, in 1830. He was followed by the end of the decade by Anne, who died in childbirth, aged 29, a couple of years after marrying. (Her son was stillborn.) Weeden himself made it to middle age, although he outlived all five children from his first marriage and was widowed, then remarried and fathered five more. Fanny made her three score and ten, while Tom, my own ancestor, was the longest lived of all, seeing ten children grow to adulthood before dying at the age of 97.

And Isabella? She was also long-lived - she almost made 88 - growing by the end to resemble Queen Victoria (with whom she was a near contemporary) to an almost uncanny degree.
[syndicated profile] scalziwhatever_feed

Posted by Athena Scalzi

Though I am a bougie bitch, there’s nothing quite like a mug full of Swiss Miss hot chocolate. I am an especially big fan of their Marshmallow flavor, so you can imagine my shock when I learned about their Marshmallow Lovers flavor that comes with even more dehydrated white chalk block marshmallows.

I’m willing to bet you didn’t even realize there were two different Marshmallow varieties of Swiss Miss to choose from. Aren’t you so glad I taught you something useful?

Anyways, I, as a Marshmallow lover, decided to see which Marshmallow Swiss Miss variety was superior. Were there enough marshmallows in the Marshmallow flavor to sate my love of them, or did I need to purchase the Marshmallow Lovers box?

Using a digital scale and some math (not easy for me), I have come up with some numbers for your consideration.

So, if you went to Kroger right now and were wanting to buy just a regular, standard size pack of hot chocolate, you’d have your choice between an 8-pack of the Marshmallow Swiss Miss, and a 6-pack of the Marshmallow Lovers Swiss Miss. Both are currently listed as selling for $2.99. I’m sure you’re wondering, well why does the lovers pack have two fewer envelopes than the regular Marshmallow pack? It’s actually because each hot chocolate packet in the Marshmallow Lovers box comes attached to a separate packet that contains the marshmallows, whereas the regular Marshmallow packs have the marshmallows in the hot chocolate envelope rather than being a separate entity.

Anyways, I decided to rip one of each open and weigh them out.

I went with the Marshmallow Lovers packet first. After zeroing out a bowl on a digital scale, I dumped only the contents of the hot chocolate packet into the bowl. The powder came out to 40 grams. I then threw in the marshmallows. The total weight was now 45 grams. A whopping 5 grams of marshmallows in the Marshmallow Lovers packet.

I zeroed out a new bowl so there was no residual powder to contribute to the weight of the Marshmallow packet. I dumped it in the new bowl, then carefully removed each marshmallow from the powder so I could weigh the powder alone first. 38 grams of powder. I threw the marshmallows back in. 39 grams.

I could hardly believe my eyes. A measly one gram of marshmallows in the Marshmallow pack? It felt like too little, but if you go for the upgrade to the Marshmallow Lovers, you lose out on a whole two envelopes!

If you add it all up, in the entire Marshmallow box, there are 304 grams of hot chocolate and 8 grams of marshmallows. For the Marshmallow Lovers, we’re looking at 240 grams of hot chocolate and 30 grams of marshmallows. About 21% less powder, but almost 4 times the amount of marshmallows. Is it worth it to buy the Marshmallow Lovers package? It’s tough to say.

Part of me is tempted to buy the Marshmallow Lovers package just so Swiss Miss knows there’s someone out there that loves their marshmallows. They have to see demand if I want them to keep making it, right?

On the other hand, I could just buy regular Swiss Miss and put my own marshmallows in it. I don’t need Swiss Miss to supply me with their little freaky mallows, I can just throw mini Jet-Puffed marshies in any cup of hot chocolate I want, and as many as I want. I am not limited to a mere one or even five grams.

For now, I will drink the Marshmallow one, because the 30-pack of it was selling for a really good price, so it just made sense to get the bulk box. I will absolutely go through it all.

Do you like hot chocolate? What do you like to top yours with? Have you tried the Marshmallow Lovers variety yourself? Let me know in the comments, and have a great day!

-AMS
