
More models, more features: what’s new in ‘parameters’ 0.2.0

[This article was first published on R on easystats, and kindly contributed to R-bloggers.]

The easystats project continues to grow, expanding its capabilities and features, and the parameters package 0.2.0 update is now on CRAN.

The primary goal of this package is to provide utilities for processing the parameters of various statistical models. It is useful for end-users as well as developers, as it is a lightweight and openly developed package.

The main function, model_parameters(), can be seen as an alternative to broom::tidy(). However, the package also includes many more useful features, some of which are described in our improved documentation:

Improved Support

Besides stabilizing and improving the functions for the most popular models (glm(), glmer(), stan_glm(), psych and lavaan, …), the functions p_value(), ci(), standard_error(), standardize() and, most importantly, model_parameters() now support many more model objects, including:

  • mixed models from the packages nlme, glmmTMB or GLMMadaptive,
  • zero-inflated models from the package pscl,
  • other regression types from the packages gam or mgcv,
  • fixed effects regression models from panelr, lfe, feisr or plm,
  • and structural models from FactoMineR.
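To illustrate this unified interface, here is a minimal sketch using a toy logistic regression on the built-in mtcars data (the model and variables are our own illustrative choice, not taken from the post):

```r
library(parameters)

# Toy logistic regression on the built-in mtcars data
model <- glm(vs ~ wt + cyl, data = mtcars, family = binomial())

# One call, broom::tidy()-style: a data frame of coefficients,
# standard errors, confidence intervals, test statistics and p-values
model_parameters(model)

# The lower-level helpers work on the same model object
ci(model)             # confidence intervals
standard_error(model) # standard errors
p_value(model)        # p-values
```

Because model_parameters() returns a plain data frame, its output can be filtered, sorted or passed on to further processing like any other data frame.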

Improved Printing

For models with special components, in particular zero-inflated models, model_parameters() separates these components for a clearer output.

## # Conditional component
## 
## Parameter   | Coefficient |   SE |         95% CI |     z |      p
## ------------------------------------------------------------------
## (Intercept) |       -0.36 | 0.28 | [-0.90,  0.18] | -1.30 | > .1  
## spp (PR)    |       -1.27 | 0.24 | [-1.74, -0.80] | -5.27 | < .001
## spp (DM)    |        0.27 | 0.14 | [ 0.00,  0.54] |  1.95 | 0.05  
## spp (EC-A)  |       -0.57 | 0.21 | [-0.97, -0.16] | -2.75 | < .01 
## spp (EC-L)  |        0.67 | 0.13 | [ 0.41,  0.92] |  5.20 | < .001
## spp (DES-L) |        0.63 | 0.13 | [ 0.38,  0.87] |  4.96 | < .001
## spp (DF)    |        0.12 | 0.15 | [-0.17,  0.40] |  0.78 | > .1  
## mined (no)  |        1.27 | 0.27 | [ 0.74,  1.80] |  4.72 | < .001
## 
## # Zero-Inflated component
## 
## Parameter   | Coefficient |   SE |         95% CI |     z |      p
## ------------------------------------------------------------------
## (Intercept) |        0.79 | 0.27 | [ 0.26,  1.32] |  2.90 | < .01 
## mined (no)  |       -1.84 | 0.31 | [-2.46, -1.23] | -5.87 | < .001
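The spp and mined terms suggest the output above comes from the well-known Salamanders example. As a hedged sketch (the post does not show the model code, so the exact formula below is an educated guess, assuming the glmmTMB package and its bundled Salamanders data), such a two-component summary could be produced like this:

```r
library(glmmTMB)    # provides the Salamanders data set
library(parameters)

# Zero-inflated Poisson model: counts depend on species and mining
# status; the zero-inflation component depends on mining status
m <- glmmTMB(count ~ spp + mined,
             ziformula = ~ mined,
             family = poisson(),
             data = Salamanders)

# model_parameters() prints the conditional and the zero-inflated
# components as separate, clearly labelled blocks
model_parameters(m)
```

The component labels also appear in a Component column of the returned data frame, so the two parts can be separated programmatically as well.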

Join the team

There is still room for improvement, and some exciting new features are already planned. Feel free to let us know how we could further improve this package!

Note that easystats is a young project in active development and is looking for contributors and supporters, so do not hesitate to contact one of us if you want to get involved 🙂

  • Check out our other blog posts here!

