|
| uniqueuid wrote:
| These speedups are awesome, but of course one wonders why they
| haven't been a low-hanging fruit over the past 25 years.
|
| Having read about some of the changes [1], it seems like the
| python core committers preferred clean over fast implementations
| and have deviated from this mantra with 3.11.
|
| Now let's get a sane concurrency story (no multiprocessing /
| queue / pickle hacks) and suddenly it's a completely different
| language!
|
| [1] Here are the python docs on what precisely gave the speedups:
| https://docs.python.org/3.11/whatsnew/3.11.html#faster-cpyth...
|
| [edit] A bit of explanation of what I meant by low-hanging fruit:
| One of the changes is "Subscripting container types such as list,
| tuple and dict directly index the underlying data structures."
| Surely that seems like a straightforward idea in retrospect. In
| fact, many python (/c) libraries try to do zero-copy work with
| data structures already, such as numpy.
| fastball wrote:
| > Now let's get a sane concurrency story
|
| This is in very active development[1]! And seems like the Core
| Team is not totally against the idea[2].
|
| [1] https://github.com/colesbury/nogil
|
| [2] https://pyfound.blogspot.com/2022/05/the-2022-python-
| languag...
| siekmanj wrote:
| Python has actually had concurrency since about 2019:
| https://docs.python.org/3/library/asyncio.html. Having used it
| a few times, it seems fairly sane, but tbf my experience with
| concurrency in other languages is fairly limited.
|
| edit: ray https://github.com/ray-project/ray is also pretty
| easy to use and powerful for actual parallelism
| sgtlaggy wrote:
| Concurrency in Python is a weird topic, since multiprocessing
| is the only "real" concurrency. Threading is "implicit"
| context switching all in the same process/thread, asyncio is
| "explicit" context switching. On top of that, you also have
| the complication of the GIL. If threads don't release the
| GIL, then you can't effectively switch contexts.
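|
| A minimal sketch of that distinction, using nothing beyond the
| standard library: a thread can be switched out between any two
| bytecode instructions, while a coroutine only gives up control
| at an explicit await.
|
|   import asyncio
|   import threading
|
|   def worker(name):
|       # Threads are preempted implicitly; the switch can happen
|       # between any two steps.
|       for i in range(3):
|           print(f"thread {name}: step {i}")
|
|   async def coro(name):
|       # A coroutine only yields control at an await point.
|       for i in range(3):
|           print(f"coro {name}: step {i}")
|           await asyncio.sleep(0)  # explicit context switch
|
|   threads = [threading.Thread(target=worker, args=(c,))
|              for c in "ab"]
|   for t in threads:
|       t.start()
|   for t in threads:
|       t.join()
|
|   async def main():
|       await asyncio.gather(coro("a"), coro("b"))
|
|   asyncio.run(main())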
| hn_saver wrote:
| Threading IS concurrency. When you say "real" concurrency,
| you actually mean parallelism.
| bornfreddy wrote:
| Not in CPython it isn't. Threading in CPython doesn't
| allow 2 threads to run concurrently (because of GIL). As
| GP correctly stated, you _need_ multiprocessing (in
| CPython) for concurrency.
| dragonwriter wrote:
| > Concurrency in Python is a weird topic, since
| multiprocessing is the only "real" concurrency.
|
| You are confusing concurrency and parallelism.
|
| > Threading is "implicit" context switching all in the same
| process/thread
|
| No, threading is separate native threads but with a lock
| that prevents execution of Python code in separate threads
| simultaneously (native code in separate threads, with at
| most one running Python, can still work).
| dekhn wrote:
| Asyncio violates every aspect of compositional orthogonality;
| just like decorators, you can't combine it with anything else
| without completely rewriting your code around its
| constrictions. It has also caused a huge number of pip
| installation problems around the AWS CLI and boto.
| Joker_vD wrote:
| Having both Task and Future was a pretty strange move; and
| the lack of static typing certainly doesn't help: the
| moment you get a Task wrapping another Task wrapping the
| actual result, you _really_ want some static analysis tool
| to tell you that you forgot one "await".
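|
| A minimal sketch of that trap (fetch and fetch_task are made-up
| names): one stray layer of wrapping hands you a Task where you
| expected a value, and nothing complains until runtime.
|
|   import asyncio
|
|   async def fetch() -> int:
|       await asyncio.sleep(0)
|       return 42
|
|   async def fetch_task():
|       # Oops: returns a Task wrapping the coroutine, not its result.
|       return asyncio.create_task(fetch())
|
|   async def main():
|       result = await fetch_task()  # awaits the outer coroutine only
|       print(result)                # a Task object, not 42
|       print(await result)          # the forgotten await yields 42
|
|   asyncio.run(main())
|
| With type annotations, mypy or a similar checker can flag result
| as a Task rather than an int, which is the static analysis being
| asked for here.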
| stepanhruda wrote:
| I'm a fan of asyncio, the parent probably meant to say
| parallelism though, since that's what getting rid of GIL
| unlocks.
| uniqueuid wrote:
| I have used asyncio in anger quite a bit, and have to say
| that it seems elegant at first and works very well for some
| use cases.
|
| But when you try to do things that aren't a map-reduce or
| Pool.map() pattern, it suddenly becomes pretty warty. E.g.
| scheduling work out to a processpool executor is ugly under
| the hood and IMO ugly syntactically as well.
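|
| For reference, a minimal sketch of what that looks like today
| (cpu_bound is a made-up placeholder): you can't simply await a
| regular function; you go through the running loop, an executor
| object, and positional-only argument passing.
|
|   import asyncio
|   from concurrent.futures import ProcessPoolExecutor
|
|   def cpu_bound(n: int) -> int:
|       return sum(i * i for i in range(n))
|
|   async def main():
|       loop = asyncio.get_running_loop()
|       with ProcessPoolExecutor() as pool:
|           # functools.partial is needed for keyword arguments.
|           result = await loop.run_in_executor(pool, cpu_bound, 10**7)
|       print(result)
|
|   if __name__ == "__main__":
|       asyncio.run(main())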
| tandav wrote:
| > scheduling work out to a processpool executor is ugly
| under the hood and IMO ugly syntactically as well.
|
| Are you talking about this example?
| https://docs.python.org/3/library/asyncio-
| eventloop.html#asy...
| [deleted]
| BugsJustFindMe wrote:
| I find asyncio to be horrendous, both because of the
| silliness of its demands on how you build your code and also
| because of its arbitrarily limited scope.
| Thread/ProcessPoolExecutor is personally much nicer to use
| and universally applicable...unless you need to accommodate
| Ctrl-C and then it's ugly again. But fixing _that_ stupid
| problem would have been a better expenditure of effort than
| asyncio.
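|
| For contrast, a minimal sketch of the executor style being
| described (example.com/org are stand-in URLs): the worker is
| plain blocking code and nothing about it has to change.
|
|   import urllib.request
|   from concurrent.futures import ThreadPoolExecutor
|
|   def fetch_length(url: str) -> int:
|       # Ordinary serial code; no event loop, no async/await.
|       with urllib.request.urlopen(url) as resp:
|           return len(resp.read())
|
|   urls = ["https://example.com", "https://example.org"]
|   with ThreadPoolExecutor(max_workers=4) as pool:
|       for url, n in zip(urls, pool.map(fetch_length, urls)):
|           print(url, n)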
| coldtea wrote:
| > _I find asyncio to be horrendous, both because of the
| silliness of its demands on how you build your code and
| also because of its arbitrarily limited scope._
|
| Do you compare it to threads and pools, or judge it on its
| merits as an async framework (with you having experience of
| those that you think are done better elsewhere, e.g. in
| Javascript, C#, etc)?
|
| Because both things you mention, "demands on how you build
| your code" and "limited scope", are par for the course with
| async in most languages that aren't async-first.
| BugsJustFindMe wrote:
| > _Because both things you mention, "demands on how you
| build your code" and "limited scope", are par for the
| course with async in most languages_
|
| I don't see how "asyncio is annoying and can only be used
| for a fraction of scenarios everywhere else too, not just
| here" is anything other than reinforcement of what I
| said. OS threads and processes already exist, can already
| be applied universally for everything, and the pool
| executors can work with existing serial code without
| needing the underlying code to contort itself in very
| fundamental ways.
|
| Python's version of asyncio being no worse than someone
| else's version of asyncio does not sound like a strong
| case for using Python's asyncio vs fixing the better-in-
| basically-every-way concurrent.futures interface that
| already existed.
| coldtea wrote:
| > _I don't see how "asyncio is annoying and can only be
| used for a fraction of scenarios everywhere else too, not
| just here" is anything other than reinforcement of what I
| said._
|
| Well, I didn't try to refute what you wrote (for one,
| it's clearly a personal, subjective opinion).
|
| I asked what I've asked merely to clarify whether your
| issue is with Python's asyncio (e.g. Python got it wrong)
| or with the tradeoffs inherent in async io APIs in
| general (regardless of Python).
|
| And it seems that it's the latter. I, for one, am fine
| with async APIs in JS, which have the same "problems" as
| the ones you've mentioned for Python's, so I don't share
| the sentiment.
| BugsJustFindMe wrote:
| > _I've asked merely to clarify whether your issue is
| with Python's asyncio (e.g. Python got it wrong) or with
| the tradeoffs inherent in async io APIs in general
| (regardless of Python)_
|
| Both, but the latter part is contextual.
|
| > _I, for one, am fine with async APIs in JS_
|
| Correct me if you think I'm wrong, but JS in its native
| environment (the browser) never had access to the OS
| thread and process scheduler, so the concept of what
| could be done was limited from the start. If all you're
| allowed to have is a hammer, it's possible to make a fine
| hammer.
|
| But
|
| 1. Python has never had that constraint
|
| 2. Python's asyncio in particular is a shitty hammer that
| only works on special asyncio-branded nails
|
| and 3. Python already had a better futures interface for
| what asyncio provides and more before asyncio was added.
|
| The combination of all three of those is just kinda
| galling in a way that it isn't for JS because the
| contextual landscape is different.
| int_19h wrote:
| Try C# as a basis for comparison, then. It also has
| access to native threads and processes, but it adopted
| async - indeed, it's where both Python and JS got their
| async/await syntax from.
| fastball wrote:
| asyncio is still single-threaded due to the GIL.
| siekmanj wrote:
| True, but it's not trying to be multi-threaded, just
| concurrent.
| ikinsey wrote:
| While not ideal, this can be mitigated with
| multiprocessing. Python asyncio exposes interfaces for
| interacting with multiple processes [1].
|
| [1] https://docs.python.org/3/library/asyncio-
| eventloop.html#asy...
| ikinsey wrote:
| I love asyncio! It's a very well put together library. It
| provides great interfaces to manage event loops, io, and some
| basic networking. It gives you a lot of freedom to design
| asynchronous systems as you see fit.
|
| However, batteries are not included. For example, it provides
| no HTTP client/server. It doesn't interop with any
| synchronous IO tools in the standard library either, making
| asyncio a very insular environment.
|
| For the majority of problems, Go or Node.js may be better
| options. They have much more mature environments for managing
| asynchrony.
| 1337shadow wrote:
| It depends how you see it
| https://journal.stuffwithstuff.com/2015/02/01/what-color-
| is-...
| ikinsey wrote:
| This is exactly why Go is a better option for async use
| cases.
| int_19h wrote:
| Until you need to do async FFI. Callbacks and the
| async/await syntactic sugar on top of them compose nicely
| across language boundaries. But green threads are VM-
| specific.
| notpushkin wrote:
| It does indeed, but personally, I believe with
| async/await the main pain point of this post (callback
| hell) is essentially gone.
| emmelaich wrote:
| Perhaps they took a lesson from Perl; its code base was so
| complex as to be near unmaintainable.
|
| In addition to the other point here about speed not being a
| target in the first place.
| MrYellowP wrote:
| I don't wonder. For me it seems pretty clear.
|
| I believe the reason is that python does not need any low-
| hanging fruits to have people use it, which is why they're a
| priority for so many other projects out there. Low-hanging
| fruits attract people who can't reach higher than that.
|
| When talking about low-hanging fruits, it's important to
| consider who they're for: the intended target audience. It's
| important to ask oneself who grabs for low-hanging fruits and
| why they need to be prioritized.
|
| And with that in mind, I think the answer is actually obvious:
| Python never required the speed, because it's just so good.
|
| The language is _so_ popular, people search for and find ways
| around its limitations, which most likely actually even
| _increases_ its popularity, because it gives people a lot of
| space to tinker in.
| uniqueuid wrote:
| I see your point, but it directly conflicts with the effort
| many people put into producing extremely fast libraries for
| specific purposes, such as web frameworks (benchmarked
| extensively), ORMs and things like json and date parsing, as
| seen in the excellent ciso8601 [1] for example.
|
| [1] https://github.com/closeio/ciso8601
| mywittyname wrote:
| Isn't this the point made in the last paragraph, about how
| people find ways around the limitations?
| dagmx wrote:
| I disagree that it conflicts. There's an (implied) ceiling
| on Python performance, even after optimizations. The fear
| has always been that removing the design choices that cause
| that ceiling would result in a different, incompatible
| language or runtime.
|
| If everyone knows it's never going to reach the performance
| needed for high performance work, and there's already an
| excellent escape hatch in the form of C extensions, then
| why would people be spending time on the middle ground of
| performance? It'll still be too slow to do the things
| required, so people will still be going out to C for them.
|
| Personally though, I'm glad for any performance increases.
| Python runs in so much critical infrastructure that even a
| few percent would likely be a considerable energy savings
| when spread out over all users. Of course that assumes
| people upgrade their versions...but the community tends to
| be slow to do so in my experience.
| avgcorrection wrote:
| > I believe the reason is that python does not need any low-
| hanging fruits to have people use it, which is why they're a
| priority for so many other projects out there. Low-hanging
| fruits attract people who can't reach higher than that.
|
| Ah! So they are so tall that picking the low-hanging fruit
| would be too inconvenient for them.
|
| Talk about stretching an analogy too far.
| pdpi wrote:
| > Low-hanging fruits attract people who can't reach higher
| than that.
|
| Do we have completely different definitions of low-hanging
| fruit?
|
| Python not "requiring" speed is a fair enough point if you
| want to argue against large complex performance-focused
| initiatives that consume too much of the team's time, but the
| whole point of calling something "low-hanging fruit" is
| precisely that they're easy wins -- get the performance
| without a large effort commitment. Unless those easy wins
| hinder the language's core goals, there's no reason to
| portray it as good to actively avoid chasing those wins.
| int_19h wrote:
| It can also be a "low-hanging fruit" in the sense that it's
| possible to do without massive breakage of the ecosystem
| (incompatible native modules etc). That is, it's still
| about effort - but effort of the people using the end
| result.
| MrYellowP wrote:
| > is precisely that they're easy wins -- get the
| performance without a large effort commitment.
|
| Oh, that's not how I interpret low-hanging fruits. From my
| perspective a "low-hanging fruit" is like cheap pops in
| wrestling: things you say that you _know_ will cause a
| positive reaction, like saying the name of the town you're
| in.
|
| As far as I know, the low-hanging fruit isn't named like
| that because of _the fruit_, but because of _those who
| reach for it._
|
| My reason for this is the fact that the low-hanging fruit
| is "being used" specifically _because_ there's lots of
| people who can reach it. The video gaming industry as a
| whole, but specifically the mobile space, pretty much
| serves as perfect evidence of that.
|
| Edit:
|
| It's done _for a certain target audience_, because _it
| increases exposure and interest_. In a way, one might even
| argue that _the target audience itself_ is a low-hanging
| fruit, because the creators of the product didn't care
| much about quality and instead went for that which simply
| _impresses_.
|
| I don't think python would have gotten anywhere if they had
| aimed for that kind of low-hanging fruit.
| pdpi wrote:
| Ah, ok. We're looking at the same thing from different
| perspectives then.
|
| What I'm describing, which is the sense I've always seen
| that expression used in engineering, and what GP was
| describing, is: this is an easy, low-risk project that
| has a good chance of producing results.
|
| E.g. If you tell me that your CRUD application suffers
| from slow reads, the low-hanging fruit is stuff like
| making sure your queries are hitting appropriate indices
| instead of doing full table scans, or checking that
| you're pooling connections instead of creating/dropping
| connections for every individual query. Those are easy
| problems to check for and act on that don't require you
| to try to grab the hard-to-reach fruit at the top of the
| tree, like completely redesigning your DB schema or
| moving to a new DB engine altogether.
| simias wrote:
| I suspect that the number of people and especially companies
| willing to spend time and money optimizing Python is fairly
| low.
|
| Think about it: if you have some Python application that's
| having performance issues you can either dig into a foreign
| codebase to see if you can find something to optimize (with no
| guarantee of result) and if you do get something done you'll
| have to get the patch upstream. And all that "only" for a 25%
| speedup.
|
| Or you could rewrite your application in part or in full in Go,
| Rust, C++ or some other faster language to get a (probably)
| vastly bigger speedup without having to deal with third
| parties.
| dubbel wrote:
| You are right, that's usually not how it works.
|
| Instead, there are big companies, who are running let's say
| the majority of their workloads in python. It's working well,
| it doesn't need to be very performant, but together all of
| the workloads are representing a considerable portion of your
| compute spend.
|
| At a certain scale it makes sense to employ experts who can
| for example optimize Python itself, or the Linux kernel, or
| your DBMS. Not because you need the performance improvement
| for any specific workload, but to shave off 2% of your total
| compute spend.
|
| This isn't applicable to small or medium companies usually,
| but it can work out for bigger ones.
| LtWorf wrote:
| There was some guarantee of result. It has been a long
| process but there was mostly one person who had identified a
| number of ways to make it faster but wanted financing to
| actually do the job. Seems Microsoft is doing the financing,
| but this has been going on for quite a while.
| ketralnis wrote:
| > you can either dig into a foreign codebase ...
|
| > Or you could rewrite your application
|
| Programmers love to save an hour in the library by spending a
| week in the lab
| prionassembly wrote:
| Everyone likes to shirk their jobs; engineers and
| programmers have ways of making their fun (I'm teaching
| myself how to write parsers by giving each project a DSL)
| look like work. Lingerie designers or eyebrow barbers have
| nothing of the sort, they just blow off work on TikTok or
| something.
| fragmede wrote:
| TikTok's got some fun coding content, if you can get the
| algorithm to surface it to you.
| Sesse__ wrote:
| Because Guido van Rossum just isn't very good with performance,
| and when others tried to contribute improvements, he started
| heckling their talk because he thought they were
| "condescending": https://lwn.net/Articles/754163/ And by this
| time, we've come to the point where the Python extension API is
| as good as set in stone.
|
| Note that all of the given benchmarks are microbenchmarks; the
| gains in 3.11 are _much_ less pronounced on larger systems like
| web frameworks.
| LtWorf wrote:
| Yeah breaking compatibility kills a language. He did the
| right thing.
| KptMarchewa wrote:
| The answer is very simple: the number of people who got paid
| to make Python fast rounded to 0.
| dekhn wrote:
| Not really. There were a couple of engineers working at Google
| on a project called Unladen Swallow, which was extremely
| promising, but it eventually got canceled.
|
| The developer who worked at Microsoft to make iron python: I
| think that was his full-time project as well, and it was
| definitely faster than cpython at the time.
| pstrateman wrote:
| Neither of those projects were ever going to be accepted
| upstream though.
| dekhn wrote:
| My hope was that iron python would replace cpython as the
| standard, but for a number of reasons that was never to
| be.
| dragonwriter wrote:
| Those people were not being paid to speed up CPython,
| though, but to build mostly-source-compatible (but not at
| all native extension compatible) alternative interpreters.
| coldtea wrote:
| That's because the core team wasn't friendly to them
| making changes in CPython.
|
| Not because they deliberately only wanted to build an
| alternative interpreter.
| dragonwriter wrote:
| That's absolutely not the case with IronPython, where
| being tied to the .NET ecosystem was very much the point.
|
| But if it was just core-team resistance and not a more
| fundamentally different objective than just improving the
| core interpreter performance, maintaining support for
| native extensions while speeding up the implementation
| would have been a goal even if it had to be in a fork.
|
| They were solving a fundamentally different (and less
| valuable to the community) problem than the new faster
| CPython project.
| dekhn wrote:
| If I remember correctly, iron python ran on mono; it just
| didn't have all of the .net bits. I remember that at the
| python conference where it was introduced, he actually
| showed how you could bring up clippy or a wizard
| programmatically from python, talking directly to the
| operating system through its native apis.
|
| Really, the value came from the thread-safe
| container/collection data types, which are the foundation
| of high-performance programming.
| LtWorf wrote:
| Too bad .net is designed for windows; on linux the APIs
| feel like windows APIs that kinda sorta work on linux.
| dekhn wrote:
| I've used a few .net applications on Linux and they work
| pretty well. But I've never actually tried to develop
| using mono. I really wish Microsoft had gone all in on
| linux .net support years ago.
| coldtea wrote:
| > _That's absolutely not the case with IronPython, where
| being tied to the .NET ecosystem was very much the
| point._
|
| That might very well be, but I talked about Unladen
| Swallow and other such "native" attempts.
|
| Not attempts to port to a different ecosystem like Jython
| or IronPython or the js port.
|
| > _maintaining support for native extensions while
| speeding up the implementation would have been a goal
| even if it had to be in a fork._
|
| That's not a necessary goal. One could very well want to
| improve CPython, the core, even if it meant breaking
| native extensions. Especially if doing so meant even more
| capabilities for optimization.
|
| I, for one, would be fine with that, and am pretty sure
| all important native extensions would have adapted quite
| soon - like they adapted to Python 3 or the ARM M1 Macs.
|
| In fact, one part of the current proposed improvements
| includes some (albeit trivial) fixes to native
| extensions.
| KptMarchewa wrote:
| Core team wasn't friendly to breaking C API.
|
| Current changes don't really break it.
| tnvaught wrote:
| "The developer who worked at Microsoft to make iron python"
| - Jim Hugunin [1] (say his name!) to whom much is owed by
| the python community and humanity, generally.
|
| [1] https://en.wikipedia.org/wiki/Jim_Hugunin
| dekhn wrote:
| I'm sorry I had forgotten at the time. I almost got to
| work with him at Google. I agree he's a net positive for
| humanity (I used numeric heavily back in the day)
| switch007 wrote:
| Was the project run by a team based in Africa or Europe?
| dekhn wrote:
| I don't know! [Whoosh]
| Jasper_ wrote:
| > one wonders why they haven't been a low-hanging fruit over
| the past 25 years
|
| Because the core team just hasn't prioritized performance, and
| has actively resisted performance work, at least until now.
| The big reason has been the maintainership cost of such work,
| but oftentimes plenty of VM engineers show up to assist the
| core team and they have always been pushed away.
|
| > Now let's get a sane concurrency story
|
| You really can't easily add a threading model like that and
| make everything go faster. The hype of "GIL-removal" branches
| is that you can take your existing threading.Thread Python
| code, and run it on a GIL-less Python, and you'll instantly get
| a 5x speedup. In practice, that's not going to happen; you're
| going to have to modify your code substantially to support that
| level of work.
|
| The difficulty with Python's concurrency is that the language
| doesn't have a cohesive threading model, and many programs are
| simply held alive and working by the GIL.
| dekhn wrote:
| Yup. Sad but true. I just wanted c++ style multithreading
| with unsafe shared memory but it's too late. I was impressed
| by the recent fork that addresses this in a very
| sophisticated way, but realistically gil cpython will keep
| finding speed ups to stave off the transition.
| pca006132 wrote:
| wondering how that interacts with c/c++ extensions that
| presume the existence of the GIL
| dekhn wrote:
| A good place to start learning about this
| https://www.backblaze.com/blog/the-python-gil-past-
| present-a...
| missblit wrote:
| So we have:
|
| * Statically allocated ("frozen") core modules for fast imports
|
| * Avoid memory allocation for frames / faster frame creation
|
| * Inlined python functions are called in pure python without
| needing to jump through C
|
| * Optimizations that take advantage of speculative typing
| (Reminds me of Javascript JIT compilers -- though according to
| the FAQ Python isn't JIT yet)
|
| * Smaller memory usage for frames, objects, and exceptions
|
| Dang, that certainly does sound like low-hanging fruit. There
| are probably a lot more opportunities left if they want Python
| to go even faster.
| dagmx wrote:
| Things may seem like low hanging fruit in the abstract bullet
| point, but the work needed may be immense.
| FartyMcFarter wrote:
| > Inlined python functions are called in pure python without
| needing to jump through C
|
| Given that Python is interpreted, it's quite unclear what
| this could mean.
|
| Also, what does it mean to "call" an inlined function?? Isn't
| the point of inline functions that they don't get called at
| all?
| liuliu wrote:
| It seems that previously, it just did:
|
|   switch (op) {
|     case "call_function":
|       interpret(op.function)
|       ...
|   }
|
| Now it does:
|
|   switch (op) {
|     case "call_function":
|       ... set up frame objects etc ...
|       pc = op.function
|       continue
|     ...
|   }
|
| Not sure if they just "inlined" it or used the tail-call
| elimination trick.
| outworlder wrote:
| > Given that Python is interpreted, it's quite unclear what
| this could mean.
|
| Python is compiled. CPython runs bytecode.
|
| (If Python is interpreted, then so is Java without the
| JIT).
| FartyMcFarter wrote:
| Fair enough, but that doesn't help understanding the
| original sentence.
| enragedcacti wrote:
| It's a little confusing, but I don't think they meant
| inlining in the traditional sense. It's more like they
| inlined the C function wrapper around python functions.
|
| > During a Python function call, Python will call an
| evaluating C function to interpret that function's code.
| This effectively limits pure Python recursion to what's
| safe for the C stack.
|
| > In 3.11, when CPython detects Python code calling another
| Python function, it sets up a new frame, and "jumps" to the
| new code inside the new frame. This avoids calling the C
| interpreting function altogether.
|
| > Most Python function calls now consume no C stack space.
| This speeds up most of such calls. In simple recursive
| functions like fibonacci or factorial, a 1.7x speedup was
| observed. This also means recursive functions can recurse
| significantly deeper (if the user increases the recursion
| limit). We measured a 1-3% improvement in pyperformance.
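|
| A rough sketch of the kind of call-heavy code this targets;
| the exact numbers will vary, but the recursion-depth point is
| easy to see (on 3.10 and earlier the same depth can overflow
| the C stack instead):
|
|   import sys
|
|   def fib(n: int) -> int:
|       return n if n < 2 else fib(n - 1) + fib(n - 2)
|
|   print(fib(25))  # many pure-Python calls, no C frames in 3.11
|
|   sys.setrecursionlimit(100_000)
|
|   def depth(n: int) -> int:
|       return 0 if n == 0 else 1 + depth(n - 1)
|
|   print(depth(50_000))  # fine on 3.11 once the limit is raised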
| staticassertion wrote:
| Guido stepping away from the language will have a lot of impact
| on changes that were culturally guided.
| [deleted]
| mjw1007 wrote:
| Guido is one of the people leading the current performance
| efforts.
| staticassertion wrote:
| That's interesting. Thanks.
| Alex3917 wrote:
| > Of course one wonders why they haven't been a low-hanging
| fruit over the past 25 years.
|
| Because it's developed and maintained by volunteers, and there
| aren't enough folks who want to spend their volunteer time
| messing around with assembly language. Nor are there enough
| volunteers that it's practical to require very advanced
| knowledge of programming language design theory and compiler
| design theory as a prerequisite for contributing. People will
| do that stuff if they're being paid 500k a year, like the folks
| who work on v8 for Google, but there aren't enough people
| interested in doing it for free to guarantee that CPython will
| be maintained in the future if it goes too far down that path.
|
| Don't get me wrong, the fact that Python is a community-led
| language that's stewarded by a non-profit foundation is imho
| its single greatest asset. But that also comes with some
| tradeoffs.
| mgaunard wrote:
| first, there are lots of volunteers that want to mess with
| assembly language.
|
| second, CPython is just an interpreter written in C; there
| isn't much assembly, if any.
|
| third, contributing to CPython is sufficiently high-profile
| that you could easily land a 500k job just by putting it on
| your CV.
|
| No, those are not the real reasons why it hasn't happened
| before.
| Angostura wrote:
| Go on then - what are the _real_ reasons, in your mind?
| coldtea wrote:
| Unfriendly core team?
| mgaunard wrote:
| Why should major technology projects be "friendly" to
| random people?
|
| I think the word you're looking for is "politics".
| metadat wrote:
| Being friendly is inclusive and generally a winning
| strategy compared to others.
|
| The underlying source of the Politics of Python and
| associated perceptions stems from the core team's culture
| of not being "friendly".
| mgaunard wrote:
| People act according to their interests.
|
| They have no obligation to go out of their way to cater
| to yours.
|
| The entitlement of the new generation of open-source
| contributors to require political correctness or
| friendliness is so destructive it's ridiculous. I
| wouldn't want to be involved with any project that
| prioritizes such zealotry on top of practicality.
| metadat wrote:
| It's not about entitlement so much as about ensuring the
| project can reach its full potential and continue to
| stay relevant and useful in perpetuity as people come and
| go.
|
| The world constantly changes, projects adapt or become
| less useful.
| staticassertion wrote:
| > Because it's developed and maintained by volunteers
|
| Who is working on python voluntarily? I would assume that,
| like the Linux kernel, the main contributors are highly paid.
| Certainly, having worked at Dropbox, I can attest to at least
| _some_ of them being highly paid.
| AndrewOMartin wrote:
| Raymond Hettinger gets his Python Contributor salary
| doubled every couple of years.
| int_19h wrote:
| https://devguide.python.org/motivations/
| Fiahil wrote:
| > Now let's get a sane concurrency story (no multiprocessing /
| queue / pickle hacks) and suddenly it's a completely different
| language!
|
| Yes, and I would argue it already exists and is called Rust :)
|
| Semi-jokes aside, this is difficult and is not just about
| removing the GIL and enabling multithreading; we would need to
| get better memory and garbage collection controls. Part of
| what makes python slow and dangerous in concurrent settings
| is the ballooning memory allocations on large and fragmented
| workloads. An equivalent of -Xmx would help.
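|
| There's no real -Xmx equivalent, but as a rough sketch of the
| closest stopgap available today (POSIX only, and RLIMIT_AS
| counts address space rather than heap use):
|
|   import resource
|
|   soft, hard = resource.getrlimit(resource.RLIMIT_AS)
|   # Cap the address space at roughly 2 GiB.
|   resource.setrlimit(resource.RLIMIT_AS, (2 * 1024**3, hard))
|
|   try:
|       blob = bytearray(4 * 1024**3)
|   except MemoryError:
|       print("allocation refused instead of ballooning")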
| gostsamo wrote:
| A few people tried to excuse the slow python, but as far as I
| know the story, excuses are not necessary. Truth is that python
| was not meant to be fast, its source code was not meant to be
| fast, and its design was not optimized with the idea of being
| fast. Python was meant as a scripting language that was easy to
| learn and work with on all levels and the issue of its slowness
| became important when it outgrew its role and became an
| application language powering large parts of the internet and
| the bigger part of the very expensive ML industry. I know that
| speed is a virtue, but it becomes a fundamental virtue when you
| have to scale and python was not meant to scale. So, yes, it is
| easy for people to be righteous and furious over the issue, but
| being righteous in hindsight is much easier than useful.
| zitterbewegung wrote:
| Meta has an active port to optimize Instagram and make it
| faster, and they just open-sourced it to have optimizations
| merged back into CPython.
|
| When you have so many large companies with a vested interest
| in optimization, I believe that Python can become faster by
| doing realistic and targeted optimizations. The other
| strategies to optimize didn't work at all or just served
| internal problems at large companies.
| gostsamo wrote:
| Oh, I agree. As I said, when python and its uses scaled, it
| became quite necessary to make it fast. I like that it will
| be fast as well and I am not happy that it is slow at the
| moment. My point is that there are reasons why it was not
| optimized in the beginning and why this process of
| optimizations has started now.
| zitterbewegung wrote:
| Back when Python was started, there was really just C or
| C++ for optimized programs, and scripting languages like
| Python and Perl. But since Python had the ability to use C
| extensions, it could bypass those problems. Since Python
| was easy to learn, both web developers and scientists
| picked it up. Then financial organizations started to get
| interested, and that's really how Python cemented itself.
|
| What exactly do you do with Python that slows you down ?
| lanstin wrote:
| I worked on Python almost exclusively for maybe five
| years. Then I tried go. Each time I wrote a go program, I
| am giddy with excitement at how fast my first attempt is,
| scaling so smoothly with the number of cores. I also
| wrote a lot of fairly performant C in the 90s, so I know
| what computers can do in a second.
|
| I still use Python for cases when dev time is more
| important than execution time (which is rarer now that
| I'm working adjacent to "big data") or when I'm doing
| things like writing python to close gaps in the various
| arrays of web apps provided for navigating the corporate
| work flow, and if I went as fast as a 64 core box let me,
| we'd have some outages in corp github or artifactory or
| the like, so I just do it one slow thing at a time on 1
| core and wait for the results. Maybe multiprocessing with
| 10 process worker pool once I'm somewhat confident in the
| back end system I am talking to.
|
| (edit: removed my normal email signoff)
| cercatrova wrote:
| You should try Nim; it's Python-like but compiled, so it's
| as fast as C. These days if I want to script something
| (and don't need Python-specific libraries like pandas) I
| use Nim.
| truthwhisperer wrote:
| zitterbewegung wrote:
| Okay, you are using Python to glue two web services
| together, which is what you deem acceptable for Python to
| do, but can you just comment on the things that you
| don't use Python for anymore due to it being slow?
|
| Don't take this the wrong way, but I think you could be
| more specific. Are you saying that, similar to Go, it
| should just be faster in general?
| gary_0 wrote:
| > python was not meant to scale
|
| So many successful projects/technologies started out that
| way. The Web, JavaScript, e-mail. DNS started out as a
| HOSTS.TXT file that people copied around. Linus Torvalds
| announced Linux as "just a hobby, won't be big and
| professional like gnu". Minecraft rendered huge worlds with
| unoptimized Java and fixed-function OpenGL.
| ihaveajob wrote:
| Indeed, and that's why I love it. When I need extra
| performance, which is very rarely, I don't mind spending
| extra effort outsourcing it to a binary module. More often,
| the problem is an inefficient algorithm or data arrangement.
|
| I took a graduate level data structures class and the
| professor used Python among other things "because it's about
| 80 times slower than C, so you have to think hard about your
| algorithms". At scale, that matters.
| thesuperbigfrog wrote:
| Global Interpreter Lock (GIL) is also an issue affecting
| Python execution speed:
|
| https://en.wikipedia.org/wiki/Global_interpreter_lock
| dralley wrote:
| Not really, no.
|
| It prevents you from taking advantage of multiple cores.
| Doesn't really impact straight-line execution speed.
|
| A data structures course is primarily not going to be
| concerned with multithreading.
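|
| A rough sketch of that distinction: CPU-bound work gains nothing
| from threads under the GIL, but does scale with processes.
|
|   import time
|   from concurrent.futures import (ProcessPoolExecutor,
|                                   ThreadPoolExecutor)
|
|   def burn(n: int) -> int:
|       return sum(i * i for i in range(n))
|
|   def timed(executor_cls) -> float:
|       start = time.perf_counter()
|       with executor_cls(max_workers=4) as pool:
|           list(pool.map(burn, [2_000_000] * 4))
|       return time.perf_counter() - start
|
|   if __name__ == "__main__":
|       print("threads:  ", timed(ThreadPoolExecutor))   # ~serial time
|       print("processes:", timed(ProcessPoolExecutor))  # ~time / cores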
| atwood22 wrote:
| > Python was meant as a scripting language that was easy to
| learn and work with on all levels.
|
| Being fast isn't contradictory with this goal. If anything,
| this is a lesson that so many developers forget. Things
| should be fast by default.
| pydry wrote:
| Fast typically comes with trade offs.
|
| Languages that tried to be all things to all people really
| haven't done so well.
| BeetleB wrote:
| > Things should be fast by default.
|
| In over 90% of my work in the SW industry, being fast(er)
| was of no benefit to anyone.
|
| So no, it should not be fast by default.
| uoaei wrote:
| Ha! Good luck convincing end-users that that's true.
| BeetleB wrote:
| I don't need to. Less than 10% of end users have put in
| requests for performance improvements.
| zo1 wrote:
| Client doesn't care. As long as their advertising spend
| leads to conversion into your "slow" app, they're happy.
| vorticalbox wrote:
| Being first to market is usually more important than
| being faster than those who got there first.
| pizza234 wrote:
| > Being fast isn't contradictory with this goal. If
| anything, this is a lesson that so many developers forget.
| Things should be fast by default.
|
| It absolutely is contradictory. If you look at the
| development of programming language interpreters/VMs,
| after a certain point, improvements in speed become a
| matter of more complex algorithms and data structures.
|
| Check out garbage collectors - it's true that Golang keeps
| a simple one, but other languages progressively increase
| its sophistication - think about Java or Ruby.
|
| Or JITs, for example, which are the latest and greatest in
| terms of programming language optimization; they are
| complicated beasts.
| atwood22 wrote:
| Yes, you can spend a large amount of time making things
| faster. But note that Go's GC _is_ fast, even though it
| is simple. It's not the fastest, but it is acceptably
| fast.
| YesThatTom2 wrote:
| Funny you should pick that example in a subthread that
| you started with an assertion that code should be fast
| by default.
|
| Go's GC was intentionally slow at first. They wanted to
| get it right THEN make it fast.
|
| No offense but you're not making a strong case. You're
| sounding like an inexperienced coder that hasn't yet
| learned that premature optimization is bad.
| atwood22 wrote:
| Just FYI, Go's GC mantra is "GC latency is an existential
| threat to Go."
| soperj wrote:
| When you only have so many hours to go around, you
| concentrate on the main goals.
| atwood22 wrote:
| My point is that you can write fast code just as easily
| as you can write slow code. So engineers should write
| fast code when possible. Obviously you can spend a lot of
| time making things faster, but that doesn't mean you
| can't be fast by default.
| jjnoakes wrote:
| > you can write fast code just as easily as you can write
| slow code
|
| I think some people can do this and some can't. For some,
| writing slow code is much easier, and their contributions
| are still valuable. Once the bottlenecks are problems,
| someone with more performance-oriented skills can help
| speed up the critical path, and slow code outside of the
| critical path is just fine to leave as-is.
|
| If you somehow limited contributions only to those who
| write fast code, I think you'd be leaving way too much on
| the table.
| zo1 wrote:
| Being fast requires effort. It's not always about raw
| performance of the language you use, it's about using the
| right structures, algorithms, tradeoffs, solving the
| right problems, etc. It's not trivial and I've seen so
| many bad implementations in "fast" compiled languages.
| gus_massa wrote:
| You usually need more tricks for fast code. Bubble sort
| is easy to program (it's my default when I have to sort
| manually, and the data has only like 10 items).
|
| There are a few much better options like mergesort or
| quicksort, but they have their tricks.
|
| But to sort real data really fast, you should use
| something like timsort, which detects if the data is just
| the union of two (or a few) sorted parts, so it's faster
| in many cases where the usual sorting methods don't
| detect the sorted initial parts.
| https://en.wikipedia.org/wiki/Timsort
|
| Are you sorting integers? Strings? Ascii-only strings?
| Perhaps the code should detect some of them and run a
| specialized version.
| int_19h wrote:
| This isn't true in general, and it is _especially_ not
| true in the context of language interpreters / VMs.
| otherme123 wrote:
| Not true. Premature optimization is the root of all evil.
| You first write clean code, and then you profile and
| optimize. I refer you to the internals of dicts through
| the years (https://www.youtube.com/watch?v=npw4s1QTmPg)
| as an example of that optimization taking years of
| incremental changes. Once you see the current version,
| it's easy to claim that you would have gotten to the
| current and best version in the first place, as obvious
| as it looks in hindsight.
| _the_inflator wrote:
| Fast in which regard? Fast coding? Fast results after
| hitting "run"? ;)
| amelius wrote:
| "Premature optimization is the root of all evil."
|
| -- Donald Knuth
| GrayShade wrote:
| You should probably read the full context around that
| quote, I'm sick and tired of everyone repeating it
| mindlessly:
|
| https://softwareengineering.stackexchange.com/a/80092
|
| > Yet we should not pass up our opportunities in that
| critical 3%. A good programmer will not be lulled into
| complacency by such reasoning, he will be wise to look
| carefully at the critical code; but only after that code
| has been identified.
| winstonewert wrote:
| > You should probably read the full context around that
| quote, I'm sick and tired of everyone repeating it
| mindlessly:
|
| I'm confused. What do you think the context changes? At
| least as I read it, both the short form and full context
| convey the same idea.
| wzdd wrote:
| The quote at least contextualises "premature". As it is,
| premature optimisation is by definition inappropriate --
| that's what "premature" means. The context:
|
| a) gives a rule-of-thumb estimate of how much
| optimisation to do (maybe 3% of all opportunities);
|
| b) explains that _non_-premature optimisation is not just
| not the root of all evil but actually a good thing to do;
| and
|
| c) gives some information about how to do non-premature
| optimisation, by carefully identifying performance
| bottlenecks after the unoptimised code has been written.
|
| I agree with GP that unless we know what Knuth meant by
| "premature" it is tempting to use this quote to justify
| too little optimisation.
| GrayShade wrote:
| That quote has been thrown around all the time in order to
| justify writing inefficient code and never optimizing it.
| Python is 10-1000x slower than C, but sure, let's keep
| using it because premature optimization is the root of
| all evil, as Knuth said. People really love to ignore the
| "premature" word in that quote.
|
| Instead, what he meant is that you should profile what
| part of the code is slow and focus on it first. Knuth
| didn't say you should be fine with 10-1000x slower code
| overall.
| bshipp wrote:
| You certainly can accept that slowdown if the total
| program run-time remains within acceptable limits and the
| use of a rapid prototyping language reduces development
| time. There are times when doing computationally heavy,
| long-running processes where speed is important, but if
| the 1000x speedup is not noticeable to the user than is
| it really a good use of development time to convert that
| to a more optimized language?
|
| As was said, profile, find user-impacting bottlenecks,
| and then optimize.
| winstonewert wrote:
| I would note that the choice of programming language is a
| bit different. Projects are pretty much locked into that
| choice. You've got to decide upfront whether the trade
| off in a rapid prototyping language is good or not, not
| wait until you've written the project and then profile
| it.
| bshipp wrote:
| Certainly, but Python is flexible enough that it readily
| works with other binaries. If a specific function is
| slowing down the whole project, an alternate
| implementation of that function in another language can
| smooth over that performance hurdle. The nice thing about
| Python is that it is quite happy interacting with C or go
| or Fortran libraries to do some of the heavy lifting.
| winstonewert wrote:
| The thing is, when I see people using this quote, I don't
| see them generally using it to mean you should never
| optimize. I think people don't ignore the premature bit
| in general. Now, throwing this quote out there generally
| doesn't contribute to the conversation. But then, I
| think, neither does telling people to read the context
| when the context doesn't change the meaning of the quote.
| DasIch wrote:
| I agree with you, the context changes nothing (and I
| upvoted you for this reason). However, programming
| languages and infrastructure pieces like this are a bit
| special, in that optimizations here are almost never
| premature:
|
| * Some of the many applications relying on these pieces
| could almost certainly use the speedup, and for those it
| wouldn't be premature
|
| * The return on investment is massive due to the scale
|
| * There are tremendous productivity gains from increasing
| the performance baseline, because that reduces the time
| people have to spend optimizing applications
|
| This is very different from applications where you can
| probably define performance objectives and define much
| more clearly what is and isn't premature.
| winstonewert wrote:
| I don't know about that. Even with your programming
| language/infrastructure you still want to identify the
| slow bits and optimize those. At the end of the day, you
| only have a certain amount of bandwidth for optimization,
| and you want to use that where you'll get the biggest
| bang for your buck.
| chaosfox wrote:
| python isn't premature. python is more than 30 years old
| now, python 3 was released more than 10 years ago.
| bshipp wrote:
| It's been at least 5 years since I read an angry post
| about the 2 to 3 version change, so I guess it's finally
| been accepted by the community.
| [deleted]
| neysofu wrote:
| I would say this quote does not apply here. VM
| implementations are in the infamous 3% Donald Knuth is
| warning us about.
| maxbond wrote:
| This certainly does not mean, "tolerate absurd levels of
| technical debt, and only ever think about performance in
| retrospect."
| lmm wrote:
| > tolerate absurd levels of technical debt
|
| In my experience it's far more common for "optimizations"
| to be technical debt than the absence of them.
|
| > only ever think about performance in retrospect
|
| From the extra context it pretty much does mean that.
| "but only after that code has been identified" - 99.999%
| of programmers who think they can identify performance
| bottlenecks other than in retrospect are wrong, IME.
| dragonwriter wrote:
| Python was meeting needs well enough to be one of, if not
| the single, most popular language for a considerable time
| and continuing to expand and become dominant in new
| application domains while languages that focussed more
| heavily on performance rose and fell.
|
| And it's got commercial interests willing to throw money
| at performance now because of that.
|
| Seems like the Python community, whether as top-down
| strategy or emergent aggregate of grassroots decisions
| made the right choices here.
| maxbond wrote:
| Python had strengths that drove its adoption, namely
| that it introduced new ideas about a language's
| accessibility and readability. I'm not sure it was ever
| really meeting the needs of application developers.
| People have been upset about Python performance and how
| painful it is to write concurrent code for a long time.
| The innovations in accessibility and readability have
| been recognized as valuable - and adopted by other
| languages (Go comes to mind). More recently, it seems
| like Python is playing catch-up, bringing in innovations
| from other languages that have become the norm, such as
| asyncio, typing, even match statements.
|
| Languages don't succeed on their technical merit. They
| succeed by being good enough to gain traction, after
| which it is more about market forces. People choose
| Python for its great ecosystem and the availability of
| developers, and they accept the price they pay in
| performance. But that doesn't imply that performance
| wasn't an issue in the past, or that Python couldn't have
| been even more successful if it had been more performant.
|
| And to be clear, I use Python every day, and I deeply
| appreciate the work that's been put into 3.10 and 3.11,
| as well as the decades prior. I'm not interested in
| prosecuting the decisions about priorities that were made
| in the past. But I do think there are lessons to be
| learned there.
| [deleted]
| oaiey wrote:
| There is a race right now to be more performant. .NET, Java and
| Go already participate, and Rust/C++ are there anyway. So to
| stay relevant, Python has to start participating. .NET went
| through the same thing some years ago.
|
| And why these optimizations were not addressed before: because
| starting at a certain point, the optimizations are e.g.
| processor-specific or non-intuitive to understand, making them
| hard to maintain vs a simple, straightforward solution.
| int_19h wrote:
| > one wonders why they haven't been a low-hanging fruit over
| the past 25 years.
|
| From the very page you've linked to:
|
| "Faster CPython explores optimizations for CPython. The main
| team is funded by Microsoft to work on this full-time. Pablo
| Galindo Salgado is also funded by Bloomberg LP to work on the
| project part-time."
| ChuckNorris89 wrote:
| Hmm, looks like corporate money gets sh*t done.
| [deleted]
| lmm wrote:
| Yep. That's the thing about the open source dream;
| especially for work like this that requires enough time
| commitment to understand the whole system, and a lot of
| uninteresting grinding details such that very few people
| would do it for fun, you really need people being funded to
| work on it full-time (and a 1-year academic grant probably
| doesn't cut it), and businesses are really the only source
| for that.
| tambourine_man wrote:
| That's something that has always puzzled me as well.
|
| Given the popularity, you'd think Python would have had several
| generations of JITs by now and yet it still runs interpreted,
| AFAIK.
|
| JavaScript has proven that any language can be made fast given
| enough money and brains, no matter how dynamic.
|
| Maybe Python's C escape hatch is so good that it's not worth
| the trouble. It's still puzzling to me though.
| dragonwriter wrote:
| > JavaScript has proven that any language can be made fast
| given enough money and brains,
|
| Yeah but commercial Smalltalk proved that a long time before
| JS did. (Heck, back when it was maintained, the fastest Ruby
| implementation was built on top of a commercial Smalltalk
| system, which makes sense given they have a reasonably
| similar model.)
|
| The hard part is that "enough money" is...not a given,
| especially for noncommercial projects. JS got it because
| Google decided JavaScript speed was integral to its business
| of getting the web to replace local apps. Microsoft recently
| developed sufficient interest in Python to throw some money
| at it.
| tambourine_man wrote:
| Yes, I'm aware of the Smalltalk miracle.
|
| Google wasn't alone in optimizing JS; it actually came
| late. Safari and Firefox were already competing and
| improving their runtime speeds, though V8 did double down
| on the bet of a fast JS machine.
|
| The question is why there isn't enough money, given that
| there obviously is a lot of interest from big players.
| dragonwriter wrote:
| > The question is why there isn't enough money, given
| that there obviously is a lot of interest from big
| players.
|
| I'd argue that there wasn't actually much interest until
| recently, and that's because it is only recently that
| interest in the CPython ecosystem has intersected with
| interest in speed that has money behind it, because of
| the sudden broad relevance to commercial business of the
| Python scientific stack for data science.
|
| Both Unladen Swallow and IronPython were driven by
| interest in Python as a scripting language in contexts
| detached from, or at least not necessarily attached to,
| the existing CPython ecosystem.
| tambourine_man wrote:
| Probably. I'm not a heavy Python user.
|
| A better question would be to ask where Python is slow
| today and if it matters to big business, then.
|
| It rules in AI, for instance. Is it still mainly glue
| code for GPU execution and performance isn't at all
| critical there?
| everforward wrote:
| > Maybe Python's C escape hatch is so good that it's not
| worth the trouble.
|
| Even if it wasn't good, the presence of it reduces the
| necessity of optimizing the runtime. You couldn't call C code
| in the browser at all until WASM; the only way to make JS
| faster was to improve the runtime.
|
| > JavaScript has proven that any language can be made fast
| given enough money and brains, no matter how dynamic.
|
| JavaScript also lacks parallelism. Python has to contend with
| how dynamic the language is, as well as that dynamism
| happening in another thread.
|
| There are some Python variants that have a JIT, though they
| aren't 100% compatible. PyPy has a JIT, and iirc, IronPython
| and Jython use the JIT from their runtimes (.NET and Java,
| respectively).
| 323 wrote:
| JavaScript doesn't really have C extensions like Python has.
|
| PyPy did implement a JIT for Python, and it worked really
| well, until you tried to use C extensions.
| tambourine_man wrote:
| That's what I meant, if JavaScript hadn't been trapped in
| the browser and allowed to call C, maybe there wouldn't
| have been so much investment in making it fast.
|
| But still, it's kind of surprising that PHP has a JIT and
| Python doesn't (official implementation I mean, not PyPy).
| int_19h wrote:
| Python has a lot more third-party packages that are
| written in native code. PHP has few partly because it
| just isn't used much outside of web dev, and partly
| because the native bits that might be needed for web
| devs, like database libraries, are included in the core
| distro.
| gfd wrote:
| Did anyone try running this benchmark with other python
| implementations? (Pypy etc)
| pen2l wrote:
| Too many things with Pypy (and other supposedly faster
| implementations) don't work, so it's a no-go for a lot of
| projects to begin with.
|
| In the last 10 years, I looked into pypy I think three or four
| times to speed things up, it didn't work even once. (I don't
| remember exactly what it was that didn't play nice... but I'm
| thinking it must have been pillow|numpy|scipy|opencv|plotting
| libraries).
| alexfromapex wrote:
| One of the biggest features I'm looking forward to is the more
| specific error messages. I can't tell you how much time I've
| wasted with cryptic errors that just point to a line that has a
| list comprehension or similar.
| bilsbie wrote:
| When is that coming? Yes the error messages are confusing in a
| big list comprehension.
| agumonkey wrote:
| In a way it's a nice incentive not to nest too much (I have
| that tendency too)
| yantrams wrote:
| Hold my beer!
|
| https://i.redd.it/8waggyjyyle51.png
| wvh wrote:
| I don't think this is bad code. Maybe not the most
| Pythonic in the imperative sense, but certainly you could
| come across such constructs in languages that are more
| oriented towards functional programming. This could also
| be solved in a more readable manner by having a generator
| pipeline, though it would be a good idea to see what the
| performance of chained generators is like in your version
| and flavour of Python.
| ChadNauseam wrote:
| Hold mine :D https://github.com/anchpop/genomics_viz/blob
| /master/genomics...
|
| That's one expression because it used to be part of a
| giant comprehension, but I moved it into a function for a
| bit more readability. I'm considering moving it back just
| for kicks though.
|
| My philosophy is: if you're only barely smart enough to
| code it, you aren't smart enough to debug it. Therefore,
| you should code at your limit, to force yourself to get
| smarter while debugging
| int_19h wrote:
| Yours is nice and readable. The parent's one is not, but it
| feels like the indentation is deliberately confusing. I'd
| lay it out like so:
|
|   tags = list(set([
|       nel
|       for subli in [
|           mel
|           for subl in [
|               [jel.split('/')[2:] for jel in el]
|               for el in classified
|           ]
|           for mel in subl
|       ]
|       for nel in subli
|       if nel
|   ]))
|
| Still not very readable, tho. But that's largely due to
| Python's outputs-first sequence comprehension syntax
| being a mess that doesn't scale at all.
|
| Side note: one other thing I always hated about those
| things in Python is that there's no way to bind an
| intermediary computation to a variable. In C# LINQ, you
| can do things like:
|
|   from x in xs
|   let y = x.ComputeSomething()
|   where y.IsFoo && y.IsBar
|   select y
|
| In Python, you have to either invoke ComputeSomething
| twice, or hack "for" to work like "let" by wrapping the
| bound value in a container:
|
|   y
|   for x in xs
|   for y in [x.ComputeSomething()]
|   if y.IsFoo and y.IsBar
|
| It's not just about not repeating yourself or not running
| the same code twice, either - named variables are
| themselves a form of self-documenting code, and a
| sequence pipeline using them even where they aren't
| strictly needed can be much more readable.
| texaslonghorn5 wrote:
| Does the new walrus := in python solve your problem?
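| Something like this, for instance (just a sketch; the names are
| stand-ins for the parent's ComputeSomething/IsFoo/IsBar):
|           ys = [y for x in xs
|                 if (y := compute_something(x)).is_foo and y.is_bar]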
| roelschroeven wrote:
| The official name is assignment expression, which can be
| good to know to find documentation. Here's the PEP:
| https://peps.python.org/pep-0572/
| agumonkey wrote:
| That code is fine in my book: there's more static code than
| for-comprehension, and it's at worst three levels deep with
| no hard coupling between levels. It's also well spaced and
| worded.
| alexfromapex wrote:
| Will be in the 3.11 release:
| https://docs.python.org/3.11/whatsnew/3.11.html#new-features
| psalminen wrote:
| Big list comprehensions cause confusion in many ways. KISS is
| crucial with them.
| mcronce wrote:
| I agree. The rule of thumb I follow is that if the list
| comp doesn't fit on one line, whatever I'm doing is
| probably too complicated for a list comp.
| teekert wrote:
| Whenever I nest a list comprehension I feel dirty anyway, I
| wonder if a better error message will help solve something
| that is just hard to grasp sometimes.
| 8organicbits wrote:
| "Debugging is twice as hard as writing the code in the
| first place. Therfore, if you write the code as cleverly as
| possible, you are, by definition, not smart enough to debug
| it." - Rajanand
|
| Although, personally, I enjoy python list comprehensions.
| timoteostewart wrote:
| I believe that quote about debugging being twice as hard
| as coding is by Brian Kernighan.
|
| https://en.m.wikiquote.org/wiki/Brian_Kernighan
| 8organicbits wrote:
| Maybe, I grabbed the first matching quote I found. I
| can't attribute it on my own.
| timoteostewart wrote:
| No sweat. I'm a quote nerd so it jumped out at me
| mikepurvis wrote:
| Not even with nesting-- as soon as it's long enough that
| I'm breaking it into multiple lines, I'm immediately like,
| okay this is long enough to just be a normal for-loop, or
| _maybe_ a generator function. But it doesn't need to be a
| list comprehension any more.
| polski-g wrote:
| All list comprehensions should error with message "FATAL
| PARSE ERROR: Will you really understand this in six
| months?"
| wheelerof4te wrote:
| List comprehensions should be used for making quick lists
| that contain simple calculations or filters.
|
| Anything more will come back to bite you later.
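| For example, something like this stays readable:
|           numbers = [1, 2, 3, 4, 5]
|           squares_of_evens = [n * n for n in numbers if n % 2 == 0]
|
| Once there's nesting or more than one transform involved, a
| plain loop usually reads better:
|           rows = [[" a ", ""], ["b ", " c "]]
|           cleaned = []
|           for row in rows:
|               for cell in row:
|                   if cell:
|                       cleaned.append(cell.strip())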
| TremendousJudge wrote:
| If people complain about Rust's borrow checker already...
| dec0dedab0de wrote:
| This is the first time I ever heard someone complain about
| Python's error messages in the 10+ years I've been using it.
| Even people who have just learned it pick up reading
| tracebacks after about a day of practice. The only problem
| I ever see is
| if a library swallows too much, and gives a generic error, but
| that's not something the language can fix. I really hope they
| don't change things too much.
| e-master wrote:
| I only rarely delve into the python world, but as a .net
| developer I always found it odd and confusing that the error
| message comes after the stack trace in python. I don't see
| people complaining about it, so maybe it's just a matter of
| habit?
| actuallyalys wrote:
| I don't think they're that bad, but I think the bar has
| (fortunately) become higher.
| joshmaker wrote:
| The new error messages are much better. Take a look at this
| example:
|           Traceback (most recent call last):
|             File "calculation.py", line 54, in <module>
|               result = (x / y / z) * (a / b / c)
|                        ~~~~~~^~~
|           ZeroDivisionError: division by zero
|
| In this new version it's now obvious which variable is
| causing the 'division by zero' error.
|
| https://docs.python.org/3.11/whatsnew/3.11.html#enhanced-
| err...
| bredren wrote:
| For a lot of folks just trying to get something working, it's
| way more useful to have the most likely spot where the error
| is occurring spelled out for them like this.
|
| A stack trace is useful, especially for understanding how the
| code fits into the wider system.
|
| But if the goal is to solve the problem in the code you've
| been working on, existing traces are way too verbose and if
| anything add noise and distract from getting back to being
| productive.
|
| I could see tracebacks getting a swifty terminal UI that shows
| only the pinpointed error and can be accordion'd out to show
| the rest.
| CobrastanJorji wrote:
| I've been learning Rust recently. There are a number of things
| about the language that I dislike, but its error messages are
| an absolute joy. They clearly put an incredible amount of
| effort into them, and it really shows.
| jmcgough wrote:
| I wrote 2-3 of those and I really wish more languages had a
| similar approach with their standard library.
|
| When you get an error there's a verbose explanation you can
| ask for, which describes the problem, gives example code and
| suggests how you can fix it. The language has a longer ramp-
| up period because it contains new paradigms, so little
| touches like this help a lot in onboarding new devs.
| dralley wrote:
| The exception is IoError, which doesn't tell you which file
| failed to be opened/closed/read/whatever. I know this is
| because doing so would require an allocation, but it's still
| painful.
| estebank wrote:
| The GP comment is referring to the compile time diagnostics
| and not the runtime diagnostics, which certainly could do
| with some out-of-the-box improvements but that you can
| extend in your own code (as opposed to compile errors which
| crate authors have little control over, at least for now).
| rowanG077 wrote:
| huh a stack trace should tell you that.
| tialaramex wrote:
| Your parent is thinking about the compiler errors, whereas
| std::io::Error (which I'm guessing you mean) is an Error
| type your software can get from calling I/O functions at
| runtime.
|
| To be fair, if decorating the error with information about
| a filename is what you needed, since Rust's Error types are
| just types nothing stops you making _your_ function's Error
| type be the tuple (std::io::Error, &str) if you _have_ a
| string reference, or (though this seems terribly
| inappropriate in production code) leaking a Box to make a
| static reference from a string whose lifetime isn't
| sufficient.
| mixmastamyk wrote:
| There have been a number of release blockers recently, with
| new ones appearing even as others get fixed:
|
| https://mail.python.org/archives/list/python-dev@python.org/...
|
| Folks not desperate for the improvements might want to wait
| before jumping in.
| londons_explore wrote:
| This looks like incremental performance work rather than a
| ground-up new approach, like an optimising compiler or JIT...
| int_19h wrote:
| Python JITs were tried before, more than once. The usual
| problem there is that the gains are very modest unless you can
| also break the ABI for native modules. But if you do the
| latter, most users won't even bother with your version.
| cglace wrote:
| I believe a JIT is planned for 3.12
| imwillofficial wrote:
| That's wild, do we typically see such gains in dot releases? I
| don't remember the last time this happened.
|
| Great news.
| wheelerof4te wrote:
| Python's versioning has been misleading ever since 3.0 shipped.
|
| There was a heap of features added between 3.5 and 3.11, for
| example. Enough to make it a completely different language.
| dswalter wrote:
| Python dot releases are significant releases. Perhaps you
| recall the last python integer version number change?
|
| As a result, it will be quite some time before we see a version
| 4.
| dagmx wrote:
| Programming language dot releases can tend to be significant.
|
| Semver often means that major is the language version, and
| minor is the runtime version
| luhn wrote:
| To nitpick a little bit, Python does not follow semver and
| does not claim to follow semver. Minor versions are _mostly_
| backwards compatible, but do include breaking changes.
| tasubotadas wrote:
| Cool improvement, but it changes very little when Python is
| 100x slower than other GC languages.
| fmakunbound wrote:
| This was my immediate thought when I looked at the numbers, but
| I didn't want to be "that guy". I think if all you ever work in
| is Python, it's still nice even if it is a tiny improvement.
| cglace wrote:
| The goal with faster cpython is for small compounding
| improvements with each point release[0]. So in the end it
| should be much more than a tiny improvement.
|
| [0] https://github.com/markshannon/faster-
| cpython/blob/master/pl...
| unethical_ban wrote:
| It changes a lot when that doesn't matter because an org only
| uses python for their ops automation.
|
| Doing side-by-side comparisons between Golang and Python on
| Lambda last year, we halved the total execution time on a
| relatively simple script. A factor of 100, I assume, is an
| absolute best case.
| 323 wrote:
| It will still improve the data center power bill of thousands
| of companies, while saying to them "just Rewrite it in Rust"
| will not.
| timeon wrote:
| > other GC languages.
|
| Doesn't sound like Rust.
|
| (But other than that I agree with your point.)
| woodruffw wrote:
| This is probably the wrong comparison to make: Python is two
| orders of magnitude slower than _compiled_ GC languages, but
| it's in the same order of magnitude as most other
| _interpreted_ GC languages.
|
| (It actually changes a whole lot, because there's a whole lot
| of code already out there written in Python. 25% faster is
| still 25% faster, even if the code _would have been_ 100x
| faster to begin with in another language.)
| bufferoverflow wrote:
| JS is interpreted, and it's much much faster than Python.
| Wohlf wrote:
| I thought Python was slower than many other interpreted
| languages but looking at some benchmark it turns out I was
| wrong.
|
| It's about on par with Ruby, only Lua and JIT compiled
| runtimes beat it (for very understandable reasons).
| cutler wrote:
| Javascript spanks other scripting languages.
| TuringTest wrote:
| _> 25% faster is still 25% faster, even if the code would
| have been 100x faster to begin with in another language.)_
|
| And that's even assuming that the code would have existed at
| all in another language. The thing about interpreted GC
| languages is that the iterative loop of creation is much more
| agile, easier to start with and easier to prototype in than a
| compiled, strictly typed language.
| z3c0 wrote:
| I think this is the best explanation for its popularity.
| Sure, it runs slow, but it's very fast to write. The latter
| is usually more important in the world of business, for
| better or worse.
| jonnycomputer wrote:
| For the majority of code I write, efficiency is mostly a
| secondary concern; it will be run only a few times at most,
| and the fact that it's a few thousand times slower than
| something written in C or C++ means waiting a few more
| seconds (at most) for my results. Most of my time is spent
| writing it, not running it.
| speedgoose wrote:
| Python being interpreted is the main reason why it's slow,
| but that's no excuse not to compare it to similar but
| faster programming languages.
| woodruffw wrote:
| You're welcome to make the comparison. I'm just pointing
| out that it's a sort of category error, beyond the
| relatively bland observation of "interpretation is slow."
| [deleted]
| tasubotadas wrote:
| >Python being interpreted is the main reason why it's slow
|
| Common Lisp and Java beg to disagree.
| wheybags wrote:
| The JVM at least is JITted, not interpreted. Dunno anything
| about Common Lisp.
| lispm wrote:
| Common Lisp has several compilers generating machine code
| AOT. See for example http://www.sbcl.org
| AlexTWithBeard wrote:
| One of the biggest contributors to Python's (lack of) speed
| is dynamic typing. While a jump is a jump and an assignment
| is an assignment, an addition is more like "hmm... what is
| the left operand... is it a double? what is the right
| operand? wow, it's also a double! okay, cpu: add eax, ebx".
| [deleted]
| gpderetta wrote:
| There are a lot of dynamically typed languages that are
| significantly faster than python. Late binding issues can
| be effectively worked around.
| AlexTWithBeard wrote:
| Do you have an example of a dynamically typed language
| where, say, addition of two lists of doubles would be
| significantly faster than in Python?
| bobbylarrybobby wrote:
| Doesn't that sort of operation become very fast in JS
| after a few runs?
| fmakunbound wrote:
| Probably most Common Lisp implementations.
| Sesse__ wrote:
| LuaJIT.
| [deleted]
| gpderetta wrote:
| You mean concatenating the lists or pairwise addition?
|
| I don't expect the former to be slow in python as it
| would be a primitive implemented in C (although even a
| simple lisp interpreter can have an advantage here by
| just concatenating the conses).
|
| For the latter, any language runtime capable of inference
| should be able to optimize it.
| skruger wrote:
| Racket
| samatman wrote:
| I would expect this microbenchmark in particular to be as
| fast in LuaJIT as it would be in C, without the risk of
| undefined behavior if the array boundaries are improperly
| calculated.
| staticassertion wrote:
| Also "what is equals?". That's why people will do:
| def fooer(i: str, strings: list[str]): push =
| str.append for s in strings:
| push(i, s)
|
| Apparently this helps the interpreter understand that
| `append` is not getting overwritten in the global scope
| elsewhere.
| wheelerof4te wrote:
| This is one of the most common _optimizations_ in Python,
| simply because the lookup code for modules/class instances
| is horrible and slow.
| staticassertion wrote:
| It'd be nice if there were a tool you could run over your
| code that did this sort of thing. Even if it can
| technically break shit in absurd cases, I'd totally run
| it on my code, which isn't absurd.
| Sesse__ wrote:
| If it's double, it probably would be addss xmm0, xmm1 :-)
| (add eax, ebx would be for 32-bit integers.)
| kevin_thibedeau wrote:
| You can completely bypass the interpreter by running plain,
| unannotated Python code through Cython. You get a speedup
| but it is still slow from manipulating PyObjects and
| attribute lookups.
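| A minimal sketch of that workflow, if memory serves (Cython
| accepts plain .py files; "mymodule" is a made-up name):
|           # setup.py -- build with: python setup.py build_ext --inplace
|           from setuptools import setup
|           from Cython.Build import cythonize
|
|           setup(ext_modules=cythonize("mymodule.py"))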
| ianbicking wrote:
| Python is compiled to bytecode for a VM (that VM is
| CPython). The semantics and implementation of the VM
| haven't prioritized performance
| as much as some other systems. Here we're seeing improvements
| in the implementation. But the semantics will be the hard
| part, as those semantics limit the performance. For instance
| "a + b" is (I believe) compiled into bytecodes that pretty
| much follow the expression. But the implementation of "add"
| has to take into account __add__() and the possible function
| calls and exceptions and return types that implies. If you
| compiled that all down to machine code (which people have
| done) you'd still have to run all that code to check types
| and the existence of methods and so on.
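| You can see how little the bytecode itself says with dis (a
| quick sketch; exact output varies by version):
|           import dis
|
|           def add(a, b):
|               return a + b
|
|           # One BINARY_ADD / BINARY_OP instruction, but executing it
|           # still means checking types, __add__/__radd__, and so on.
|           dis.dis(add)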
| coliveira wrote:
| Dynamic languages like Javascript have solved this problem
| by essentially caching the resolution of a particular
| expression that is executed very often. I don't see why
| this cannot be done in Python.
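| (3.11's specializing interpreter is a step in exactly this
| direction. If I remember the new dis flags right, you can even
| peek at the specialized instructions after a warm-up:)
|           import dis
|
|           def add(a, b):
|               return a + b
|
|           for _ in range(1000):
|               add(1, 2)
|
|           # should show specialized forms like BINARY_OP_ADD_INT
|           dis.dis(add, adaptive=True)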
| mixmastamyk wrote:
| The expressions are dynamic, so they have to be evaluated
| every time.
|
| Python is excessively dynamic, so unfortunately it can't
| (conventionally) be sped up as easily as many other
| languages. Finally some folks are being paid and allowed
| to work on it.
| coliveira wrote:
| I don't buy this. There are many contexts in which a
| smart JIT compiler can detect that an expression cannot
| be modified. Especially since, due to the GIL, python
| code is mostly non-threaded. They just didn't spend
| enough time to do the hard work that people spent on
| Javascript.
| dbrueck wrote:
| > can detect that an expression cannot be modified
|
| Can you give some examples? I use Python a lot and it's
| absurdly dynamic. I've sometimes wondered if there'd be
| some way to let me as the programmer say "dynamicness of
| this module is only 9 instead of the default 11" but I'm
| skeptical of a tool's ability to reliably deduce things.
| Here's just one example: a = 0 for
| i in range(10): a += 1 log('A is', a)
|
| In most languages you could do static analysis to
| determine that 'a' is an int, and do all sorts of
| optimizations. In Python, you could safely deduce that
| 99.9999% of the time it's an int, but you couldn't
| _guarantee_ it because technically the 'log' function
| could randomly be really subversive and reach into the
| calling scope and remap 'a' to point to some other int-
| like object that behaves differently.
|
| Would "legitimate" code do that? Of course not, but it's
| technically legal Python and just one example of the
| crazy level of dynamic-ness, and a tool that
| automatically optimizes things would need to preserve the
| correctness of the program.
|
| EDIT: tried to fix code formatting
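| A contrived sketch of such a subversive log(), purely to show
| why a tool can't assume anything (this particular trick sticks
| when the caller is module-level code, where f_locals is the
| real module dict):
|           import sys
|
|           class SneakyInt(int):
|               def __add__(self, other):
|                   return "gotcha"   # behaves nothing like a normal int
|
|           def log(*args):
|               print(*args)
|               caller = sys._getframe(1)
|               # rebind the caller's `a` behind its back
|               caller.f_locals['a'] = SneakyInt(0)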
| dagmx wrote:
| I feel this is a bit nihilistic.
|
| Python is used in so many different areas, in some very
| critical roles.
|
| Even a small speedup is going to be a significant improvement
| in metrics like energy use.
|
| The pragmatic reality is that very few people are going to
| rewrite their Python code that works but is slow into a
| compiled language when speed of execution is trumped by ease of
| development and ecosystem.
| woodruffw wrote:
| This is really incredible work.
|
| The "What's New" page has an item-by-item breakdown of each
| performance tweak and its measured effect[1]. In particular, PEP
| 659 and the call/frame optimizations stand out to me as very
| clever and a strong foundation for future improvements.
|
| [1]: https://docs.python.org/3.11/whatsnew/3.11.html
| [deleted]
| ineedasername wrote:
| If it's 25% faster then the point version should be (rounded)
| 3.89.
| sod wrote:
| I wish there was something like llvm for scripting languages.
| Imagine if python, php, javascript, dart or ruby would not
| interpret the code themself, but compile to an interpretable
| common language, where you could just plug in the fastest
| interpreter there is for the job.
| samuelstros wrote:
| Is that not where WebAssembly is headed? Taken from their
| website "Wasm is designed as a portable compilation target for
| programming languages [...]"
| dragonwriter wrote:
| Not really. Scripting languages implemented on WASM are doing
| it by compiling their main interpreter to WASM, not the
| source that the interpreter would execute.
| Starlevel001 wrote:
| This is called the JVM and everyone has spent the last twenty
| years getting really upset about it.
| mathisonturing wrote:
| > everyone has spent the last twenty years getting really
| upset about it
|
| Could you expand on this?
| staticassertion wrote:
| https://github.com/mypyc/mypyc
|
| > Mypyc compiles Python modules to C extensions. It uses
| standard Python type hints to generate fast code. Mypyc uses
| mypy to perform type checking and type inference.
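| A tiny sketch of what that looks like in practice (module name
| made up; the compile command is from the mypyc docs as I
| remember them):
|           # fib.py -- ordinary type-hinted Python
|           def fib(n: int) -> int:
|               return n if n < 2 else fib(n - 1) + fib(n - 2)
|
|           # then `mypyc fib.py` builds a C extension, and a normal
|           # `import fib` picks up the compiled version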
| maxloh wrote:
| You can't compile to an executable with mypyc, just native C
| extensions.
| staticassertion wrote:
| My bad, thanks for clarifying.
| int_19h wrote:
| https://en.wikipedia.org/wiki/Parrot_virtual_machine
| dekhn wrote:
| In a sense, the XLA compiler for TensorFlow and other
| Python-based machine learning systems is exactly this. And
| MLIR is based on LLVM. I predict there will be a second
| generation of such systems that are more general than MLIR
| and compile large-scale computations over out-of-core data
| into efficient dataflow applications that run at native
| speed.
| roelschroeven wrote:
| Is Perl's Parrot not meant to be something like that?
| dragonwriter wrote:
| Parrot [0] was meant to be that and the main VM for Perl6
| (now Raku) and failed at both.
|
| [0] http://www.parrot.org/ states:
|
| The Parrot VM is no longer being actively developed.
|
| Last commit: 2017-10-02
|
| The role of Parrot as VM for Perl 6 (now "Raku") has been
| filled by MoarVM, supporting the Rakudo compiler.
|
| [...]
|
| Parrot, as potential VM for other dynamic languages, never
| supplanted the existing VMs of those languages.
| latenightcoding wrote:
| Raku uses MoarVM
| dragonwriter wrote:
| Yes, the quote from the Parrot website I provided says
| exactly that: the role Parrot intended to fill for Perl6
| has been filled by MoarVM supporting the Rakudo compiler.
| dragonwriter wrote:
| There have been many attempts to do this, but the problem seems
| to be that a good target for one scripting language isn't
| necessarily a good target for another one with a different
| model, and the interest in a common backend isn't strong enough
| to overcome that.
|
| (Though JS had some potential to do that by the power of its
| browser role, before compiling interpreters to WASM became a
| better way of getting browser support than compiling another
| scripting language source to JS.)
| cmrdporcupine wrote:
| Arguably the JVM has been the most successful variant of this
| type of thing. It's not a bad generic higher level language
| target overall.
|
| It's just it also has sooo much historical baggage and
| normative lifestyle assumptions along with it, and now it's
| passed out of favour.
|
| And historically support in the browser was crap and done at
| the wrong level ("applets") and then that became a dead end
| and was dropped.
| jacobn wrote:
| JVM and GraalVM?
|
| https://www.graalvm.org/
| IceHegel wrote:
| Is there a reason python hasn't already done this?
| NelsonMinar wrote:
| The Java VM has been that in the past with projects like Jython
| being the bridge. But it's never worked out very well in
| practice.
| MarkSweep wrote:
| The RPython language PyPy is written in is designed to enable
| writing JIT-compiled interpreters for any language:
|
| https://rpython.readthedocs.io/en/latest/
|
| There is also the Dynamic Language Runtime, which is used to
| implement IronPython and IronRuby on top of .NET:
|
| https://en.wikipedia.org/wiki/Dynamic_Language_Runtime
| rich_sasha wrote:
| The issue is that Python is sort of necessarily slow. Most code
| translates 1:1 to straightforward C, but of course it is
| possible to create some monstrosity where, after the 3rd time
| an exception is thrown, all integer arithmetic changes and
| string addition turns into 'eval'.
|
| All approaches to "make Python fast again" tend to lock down
| some of that flexibility, so they aren't really general purpose.
| nestorD wrote:
| This is already a thing: LLVM has a JIT interface, which is
| what Julia uses.
| Aperocky wrote:
| That's why they are called scripting languages.
| chickenpotpie wrote:
| Webassembly can fulfill that requirement for certain use cases
| cmrdporcupine wrote:
| I don't think that was the original intent of WASM nor what
| its VM is designed for, even if people are now attempting to
| make it work in this fashion.
|
| It was designed for "native-like" execution in the browser
| (or now other runtimes.) It's more like a VM for something
| like Forth than something like Python.
|
| It's a VM that runs at a much lower level than the ones you'd
| find inside the interpreter in e.g. Python:
|
| There's no runtime data model beyond simple scalar types: no
| strings, no collections, no maps. There's no garbage
| collection, and in fact just a simple "machine-like" linear
| memory model.
|
| And in fact this is the point of WASM. You can run compiled C
| programs in the browser or, increasingly, in other places.
|
| Now, are people building stuff like this overtop of WASM?
| Yes. Are there moves afoot to add GC, etc.? Yes. Personally
| I'm still getting my feet wet with WebAssembly, so I'm not
| clear on where these things are at, but I worry that trying
| to make something like that generic enough to be useful for
| more than just 1 or 2 target languages could get tricky.
|
| Anyways it feels like we've been here before. It's called the
| JVM or the .NET CLR.
|
| I like WASM because it's fundamentally more flexible than the
| above, and there's good momentum behind it. But I'm wary
| about positioning it as a "generic high level language VM."
| You can build a high-level language VM _on top of it_ but
| right now it's more of a target for portable C/C++/Rust
| programs.
| [deleted]
| antman wrote:
| Removing the Global Interpreter Lock was always refused
| because it would decrease single-threaded Python speed, which
| most people didn't care about.
|
| So I remember a guy recently came up with a patch that removed
| the GIL and, to make it easier for the core team to accept it,
| also added an equivalent amount of optimizations.
|
| I hope this release is not a case of taking the optimizations
| but ignoring the GIL part.
|
| If anyone more knowledgeable can review this and give some
| feedback, I think they will be here on HN.
| lrtap wrote:
| > I hope this release is not a case of taking the
| optimizations but ignoring the GIL part.
|
| I would not be surprised. It is highly likely that the
| optimizations will be taken, credit will go to the usual people
| and the GIL part will be extinguished.
| didip wrote:
| I agree, please don't just accept the optimization and sweep
| the GIL removal under the rug again.
| int_19h wrote:
| Removing GIL also breaks existing native packages, and would
| require wholesale migration across the entire ecosystem, on a
| scale not dissimilar to what we've seen with Python 3.
| smcl wrote:
| The various trade-offs and potential pitfalls involved in
| removing the GIL are by now very well known, not just here on
| HN but among the Python core developers who will ultimately
| do the heavy lifting when it comes to doing this work.
| icegreentea2 wrote:
| They believe that the current nogil approach can allow the
| vast majority of c-extension packages to adapt with a re-
| compile, or at worst relatively minor code changes (they
| thought it would be ~15 lines for numpy for example).
|
| Since c-extension wheels are basically built for single
| python versions anyways, this is potentially manageable.
| [deleted]
| jart wrote:
| I use a beautiful hack in the Cosmopolitan Libc codebase (x86
| only) where we rewrite NOPs into function calls at runtime for
| all locking operations as soon as clone() is called.
| https://github.com/jart/cosmopolitan/blob/5df3e4e7a898d223ce...
| The big ugly macro that makes it work is here
| https://github.com/jart/cosmopolitan/blob/master/libc/intrin...
| An example of how it's used is here.
| https://github.com/jart/cosmopolitan/blob/5df3e4e7a898d223ce...
| What it means is that things like stdio go 3x faster if
| you're not actually using threads. The tradeoff is it's
| architecture specific and requires self-modifying code. Maybe
| something like this could help Python?
| pm wrote:
| I would love to see this implemented purely for curiosity's
| sake, even if it's architecture-specific.
|
| Personally, cosmo is one of those projects that inspires me
| to crack out C again, even though I never understood the
| CPU's inner workings very well, and your work in general
| speaks to the pure joy that programming can be as an act of
| creation.
|
| Thanks for all your contributions to the community, and
| thanks for being you!
| nmstoker wrote:
| I think you mean the work by Sam Gross:
|
| https://github.com/colesbury/nogil/
|
| Interesting article about it here:
|
| https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6ad0...
| BerislavLopac wrote:
| The GIL removal by that guy reverted some of the improvements
| made by other optimisations, so the overall gain was much
| smaller.
|
| And most people _do_ care for single-threaded speed, because
| the vast majority of Python software is written as single-
| threaded.
| metadat wrote:
| > the vast majority of Python software is written as single-
| threaded.
|
| This is a self-fulfilling prophecy, as the GIL makes Python's
| (and Ruby's) concurrency story pretty rough compared to
| nearly all other widely used languages: C, C++, Java, Go,
| Rust, and even Javascript (as of late).
| SiempreViernes wrote:
| Pretty sure it is actually a statement that comes from the
| "python is a scripting language" school, and not because
| the huge horde of programmers that craves concurrency when
| they write a script to reformat a log file to csv keeps
| being put off by the python multiprocessing story.
| metadat wrote:
| Not sure I understand your point, can you clarify? Python
| is used across many different domains, being able to
| really take advantage of multiple cores would be a big
| deal.
|
| I'd really appreciate if python included concurrency or
| parallelism capabilities that didn't disappoint and
| frustrate me.
|
| If you've tried using the thread module, multiprocessing
| module, or async function coloring feature, you probably
| can relate. They can sort of work but are about as
| appealing as being probed internally in a medical
| setting.
| tyingq wrote:
| Getting rid of the GIL will also immediately expose all the
| not-thread-safe stuff that currently exists, so there are a
| couple of waves of fixes you would need before it would be
| broadly usable.
| tempest_ wrote:
| Cool, they should start now.
|
| As a Python dev, Python's multiprocessing/multithreading
| story is one of the largest pain points in the language.
|
| Single threaded performance is not that useful while
| processors have been growing sideways for 10 years.
|
| I often look at elixir with jealousy.
| citrin_ru wrote:
| IMHO the majority of Python software doesn't use threads
| because it is easier to write single-threaded code (for
| many reasons), not because of the GIL.
| metadat wrote:
| In Golang you can spawn a green thread on any function
| call with a single keyword: `go'.
|
| The ergonomics are such that it's not difficult to use.
|
| Why can't or shouldn't we have a mechanism comparably
| fantastic and easy to use in Python?
| lmm wrote:
| Because making it easy to write C/C++ extensions that
| work the way you expect (including for things like
| passing a Python callback to a C/C++ library) has always
| been a priority for Python in a way that it isn't for
| Golang?
| liuliu wrote:
| Any C/C++ extension that wants to enable more efficient
| Python has to learn about the GIL and how to manipulate it
| as well, including but not limited to: how to give up the
| GIL (so that other Python code can progress), and how to
| prepare your newly created threads to be GIL-friendly, etc.
|
| Personally, the GIL is the most surprising part to me when
| interoperating with Python.
| Twirrim wrote:
| https://docs.python.org/3/library/concurrent.futures.html
| sort of gives you that, syntax works with threads or
| processes.
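| A rough sketch of the closest equivalent (fetch and the URL
| are just placeholders):
|           from concurrent.futures import ThreadPoolExecutor
|
|           def fetch(url):
|               return len(url)   # stand-in for blocking I/O work
|
|           with ThreadPoolExecutor() as pool:
|               # roughly the moral equivalent of `go fetch(url)`
|               future = pool.submit(fetch, "https://example.com")
|               print(future.result())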
| metadat wrote:
| @Twirrim How would you rate the user experience of the
| concurrent.futures package compared to the golang `go'
| keyword?
|
| It is architecturally comparable to async Javascript
| programming, which imho is a shoehorn solution.
| [deleted]
| cercatrova wrote:
| Somewhat related, has anyone used Nim? These days I've been using
| it for its Python-like syntax yet C-like speed, especially if I
| just need to script something quickly.
| int_19h wrote:
| The problem with any new language is libraries and tooling.
| cercatrova wrote:
| I use it for scripting mainly when I don't need Python
| specific libraries like pandas. The tooling is pretty good as
| well.
| objektif wrote:
| Is it worth it to learn? Especially when you have python?
| cercatrova wrote:
| Yeah it's a nice language, it has optional static types as
| well which is always a plus in my experience. The speed
| though is what's really interesting.
| bilsbie wrote:
| I'd be curious to see the speed up compared to python 2.4. To see
| how far it's come since I first got started with it.
| upupandup wrote:
| it means you are not following PEP8
|
| If you get non-descriptive errors, it means you haven't
| followed proper exception handling/management.
|
| It's not the fault of Python but of the developer. Go ahead
| and downvote me, but if you repeated the parent's comment in
| an interview, you would not receive a call back, or at least
| I hope the interviewer would recognize the skill gap.
| uoaei wrote:
| Ah yes, the "why make things easier when you could do things
| the hard way" argument.
| d23 wrote:
| I think this is both insulting _and_ not even responding to
| the right comment.
| bilsbie wrote:
| How so?
| s9w wrote:
| anewpersonality wrote:
| What happened to multithreaded Facebookthon?
| 8organicbits wrote:
| Is this speed a feature of python now, or is this performance
| something that could regress in future if needed for a different
| feature? As in, will python continue to run this fast (or
| faster)?
| [deleted]
| sandGorgon wrote:
| Is this project sponsored by Microsoft? It seems that the devs
| in the GitHub org all work for Microsoft.
| korijn wrote:
| Yes
| bergenty wrote:
| I just learned today that python is 100x slower than C. I have
| severely lost respect for python.
| StevePerkins wrote:
| If you just learned this today, then HN has severely lost
| respect for you! lol
| bergenty wrote:
| Yes it's deserved but I've been in management for a while.
| calibas wrote:
| C is faster than Python, but it's much more nuanced than saying
| Python is "100x slower". Try writing a program to open and
| parse a CSV file line by line in both languages. You'll see why
| people like Python.
|
| Also, because of FFI, it's not like the two languages are
| opposed to each other and you have to choose. They can happily
| work together.
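| For example, the whole Python side is roughly this (filename
| made up):
|           import csv
|
|           with open("data.csv", newline="") as f:
|               for row in csv.reader(f):
|                   print(row)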
| 4gotunameagain wrote:
| And oranges contain 10x the vitamin C of apples :)
| [deleted]
| superjan wrote:
| Yesterday I watched Emery Berger's "Python performance
| matters" talk. The gist is that you should not bother trying
| to write fast Python; just delegate all the heavy lifting to
| optimized C/C++ and the like. His group has a Python profiler
| whose main goal is to point out which parts should be
| delegated. Of course an optimized interpreter is better, but
| the numeric microbenchmarks in this post are mostly examples
| of code that is better delegated to low-overhead compiled
| languages.
|
| https://youtu.be/LLx428PsoXw
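| A small sketch of what "delegate the heavy lifting" means in
| practice, with NumPy standing in for any optimized C library:
|           import numpy as np
|
|           xs = list(range(1_000_000))
|           arr = np.arange(1_000_000, dtype=np.int64)
|
|           total_py = sum(x * x for x in xs)    # interpreter work per element
|           total_np = int((arr * arr).sum())    # same result, done in C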
| ModernMech wrote:
| > you should not bother to write fast Python, just delegate all
| heavy lifting to optimized C/c++ and the likes
|
| Certainly that's something you can do, but unfortunately for
| Python it opens the door for languages like Julia, which are
| trying to say that you can have your cake and eat it too.
| wheelerof4te wrote:
| The key wording is "optimized C".
|
| CPython already executes your code in pure C; the difference
| is that it is, in effect, very slow, unoptimized C.
|
| The code for simply adding two numbers is insane. It jumps
| through enough hoops to make you wonder how it even manages
| to run everything else.
| rhabarba wrote:
| If performance is relevant to you, Python is the wrong language
| for you.
| IceHegel wrote:
| Next we need some standardization around package management. I
| recently tried installing the Metal optimized pytorch and
| descended into package and Python path hell.
| tandav wrote:
| Just use conda or docker
___________________________________________________________________
(page generated 2022-07-06 23:00 UTC) |