• 0 Posts
  • 62 Comments
Joined 3 years ago
Cake day: July 5th, 2023


  • From a business perspective it makes sense to throw all the rendering to the devices to save cost.

    Not just to save cost. It’s basically OS-agnostic from the user’s point of view. The web app works fine on desktop Linux, macOS, or Windows. In other words, when I’m on Linux I can have a solid user experience in apps designed by people who have never thought about Linux in their life.

    Meanwhile, porting native programs between OSes often means someone’s gotta maintain the libraries that call the right desktop/windowing APIs and track their behavior across each version of Windows, macOS, and the various windowing systems of Linux, not all of which behave in expected or consistent ways.



  • Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical elements it needed on demand.

    Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.

    Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through that service’s design was kinda beside the point.
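
    For anyone who never used the pre-Ajax web: the core trick was JavaScript fetching data in the background and patching one region of the page, instead of a full-page POST round trip. A minimal sketch of the pattern in TypeScript (the /api/inbox endpoint and the element ID are made up for illustration):

    ```typescript
    // Classic Ajax pattern: refresh one region of the page in place.
    // The rest of the page and the scroll position survive the update.
    async function refreshInbox(): Promise<void> {
      // Hypothetical JSON endpoint; a real app defines its own API.
      const response = await fetch("/api/inbox");
      if (!response.ok) {
        throw new Error(`Inbox fetch failed: ${response.status}`);
      }
      const messages: { subject: string }[] = await response.json();

      // Patch only the inbox element; no full page reload needed.
      const inbox = document.getElementById("inbox");
      if (inbox !== null) {
        inbox.textContent = messages.map((m) => m.subject).join("\n");
      }
    }

    // Poll every 30 seconds, the way early "Web 2.0" inboxes did.
    setInterval(() => void refreshInbox(), 30_000);
    ```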


  • Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.

    In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete sentence question are also the words that give confirmation that the searcher is talking about the movie.

    Especially with negative results, like when you ask a question where the answer is no, the semantic links in the index can sometimes lead the search engine to point out the specific mistaken assumption you’ve made.
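
    As a toy illustration of that kind of error correction, here’s roughly the shape of query expansion before a search ever hits the index. This is a made-up sketch, correction table included; real engines learn these mappings from query logs at enormous scale:

    ```typescript
    // Toy query expansion: map misspellings and synonyms onto
    // canonical terms so the index lookup catches near-misses.
    const corrections: Record<string, string> = {
      jolee: "jolie", // misspelling -> canonical spelling
      film: "movie",  // synonym -> canonical term
    };

    function expandQuery(query: string): string[] {
      return query
        .toLowerCase()
        .split(/\s+/)
        .map((term) => corrections[term] ?? term);
    }

    // expandQuery("Angelina Jolee film") -> ["angelina", "jolie", "movie"]
    ```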


  • GamingChairModel@lemmy.world to Lemmy Shitpost@lemmy.world · In heat · 9 months ago

    Why do people Google questions anyway?

    Because it gives better responses.

    Google and all the other major search engines have built-in functionality to run natural language processing on the user’s query and on the text in their indexes, to return results more precisely aligned with what the user actually wants, or to recommend related searches.

    If the functionality is there, why wouldn’t we use it?


  • GamingChairModel@lemmy.world to Lemmy Shitpost@lemmy.world · In heat · 9 months ago

    Search engine algorithms are way better than in the 90s and early 2000s, when it was naive keyword search, completely unweighted by word order in the search string.

    So the tricks we learned, typing the bare minimum to get the most precise search behavior, no longer apply the same way. Now a search for two words will add the most weight to results that contain the two words as a phrase, some weight to the two words appearing close together in the same sentence, and still match each individual word on its own, too.

    More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher means. “Heat” is a really broad word with lots of meanings, and the rest of the search can help inform the algorithm of what the user intends.
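
    A toy version of that weighting, just to make the idea concrete. Real engines use inverted indexes and learned rankers rather than a linear scan, and the score values here are invented:

    ```typescript
    // Toy ranker: an exact phrase scores highest, the two terms near
    // each other scores next, and each single term still counts alone.
    function scoreDocument(doc: string, termA: string, termB: string): number {
      const words = doc.toLowerCase().split(/\s+/);
      const posA = words.indexOf(termA);
      const posB = words.indexOf(termB);

      let score = 0;
      if (posA >= 0) score += 1; // each word alone still matches
      if (posB >= 0) score += 1;
      if (posA >= 0 && posB >= 0) {
        const gap = Math.abs(posA - posB);
        if (gap === 1) score += 4;     // adjacent: treat as a phrase
        else if (gap <= 8) score += 2; // close together in one passage
      }
      return score;
    }

    // scoreDocument("angelina jolie stars in heat", "jolie", "heat") -> 4
    ```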


  • Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and turn off the HTTP server.

    It’s technically not very secure, so it’s a bad idea to leave that unattended, but you can always encrypt a zip file before sending it and let that file-level encryption kinda make up for the lack of network-level encryption. And as a one-off thing, you should close up your firewall/port forwarding when you’re done.
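
    If you’d rather script the whole thing, here’s a minimal one-shot sketch in Node/TypeScript: it serves exactly one file and shuts the server down after the first completed download. The filename and port are placeholders:

    ```typescript
    import { createServer } from "node:http";
    import { createReadStream, statSync } from "node:fs";

    // One-shot file server: serve a single file once, then shut down.
    const FILE = "backup.zip"; // placeholder filename
    const PORT = 8080;         // placeholder port to forward

    const server = createServer((_req, res) => {
      res.writeHead(200, {
        "Content-Type": "application/octet-stream",
        "Content-Length": statSync(FILE).size,
        "Content-Disposition": `attachment; filename="${FILE}"`,
      });
      createReadStream(FILE).pipe(res);
      // Once the one download finishes, stop listening entirely.
      res.on("finish", () => server.close());
    });

    server.listen(PORT, () => {
      console.log(`Send your friend http://<your-ip>:${PORT}/ and wait.`);
    });
    ```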


  • That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but once government agencies started mandating it, the company ended up inventing and publishing the technical standards that made federation/interoperability possible. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards for the physical layer of actual copper lines and switches.


  • I’d argue that telephones are the original federated service. There were fits and starts in getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long-distance calling over the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950s, and set the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.

    We didn’t call it spam then, but unsolicited phone calls have always been a problem.


  • Loops really isn’t ready for primetime. It’s too new and unpolished, and will need a bit more time.

    I wonder if PeerTube can scale. YouTube has a whole sophisticated system for ingesting and transcoding videos into dozens of formats, with tradeoffs being made on computational complexity versus file size/bandwidth, which requires some projection of which videos will be downloaded the most in the future (and by which types of clients, with support for which codecs, etc.). Doing this takes a lot of networking/computing/memory/storage resources, and I wonder whether the software is ready for that.
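
    For a sense of what that ingest pipeline involves, here’s a sketch of a tiny transcoding ladder driven from Node, shelling out to ffmpeg. The rung heights and CRF values are arbitrary choices for illustration, not PeerTube’s or YouTube’s actual pipeline, which is far larger and codec-aware:

    ```typescript
    import { execFile } from "node:child_process";
    import { promisify } from "node:util";

    const run = promisify(execFile);

    // A tiny "ladder": each rung trades encode time and file size
    // against the quality delivered to a class of clients.
    const rungs = [
      { height: 1080, crf: 22 },
      { height: 720, crf: 23 },
      { height: 360, crf: 26 },
    ];

    async function transcode(input: string): Promise<void> {
      for (const { height, crf } of rungs) {
        // -vf scale=-2:H keeps aspect ratio; libx264 for broad support.
        await run("ffmpeg", [
          "-i", input,
          "-vf", `scale=-2:${height}`,
          "-c:v", "libx264", "-crf", String(crf),
          "-c:a", "aac",
          `out_${height}p.mp4`,
        ]);
      }
    }

    transcode("upload.mp4").catch(console.error);
    ```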